Article

Obstacle Avoidance in a Three-Dimensional Dynamic Environment Based on Fuzzy Dynamic Windows

1 CAS Key Laboratory of On-Orbit Manufacturing and Integration for Space Optics System, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(2), 504; https://doi.org/10.3390/app11020504
Submission received: 25 November 2020 / Revised: 23 December 2020 / Accepted: 1 January 2021 / Published: 6 January 2021
(This article belongs to the Section Robotics and Automation)

Abstract

This paper presents a real-time path planning approach for controlling the motion of space-based robots. The algorithm can plan three-dimensional trajectories for agents in a complex environment containing numerous static and dynamic obstacles, path constraints, and/or performance constraints. The approach extends the dynamic window approach (DWA), the classic reactive method for obstacle avoidance, which uses an optimization function to select the best motion command. The original DWA optimization function consists of three weighted terms, and changing these weights changes the behavior of the algorithm. In this paper, to improve the evaluation ability of the optimization function and the robot's ability to adapt to its environment, a new optimization function is designed and combined with fuzzy logic that adjusts the weight of each term. Because DWA suffers from local minima, which make it hard for the robot to escape U-shaped obstacles, a dual dynamic window method and local goals are adopted to help the robot escape them. Comparisons show that the proposed method is superior to traditional DWA and fuzzy DWA (F_DWA) in terms of computational efficiency, smoothness, and safety.

1. Introduction

As space technology develops, autonomous robots are expected to complete complex and dangerous tasks in space [1]. To complete a mission, a robot needs to plan a safe trajectory from a starting point to a target point. Collision avoidance and path optimization are the main issues in planning such a trajectory (i.e., navigation). Current navigation algorithms fall into two categories: global planning algorithms and local navigation algorithms. Previous studies have also combined the two to achieve better navigation, but such a combination depends on the robot's autonomy and environmental factors [2].
Some notable global navigation algorithms avoid collisions by building a global environment map: e.g., A* [3], D* [4], FD* [5], and RRT [6]. When planning a path on a map, the optimization goal is usually the shortest path or the lowest energy use. However, because the speed and accuracy of graph-based search methods depend on the granularity of the search space, those approaches are not suitable for real-time application.
To improve real-time performance, some studies have proposed local navigation algorithms. The simplest are the bug-based algorithms, e.g., bug1, bug2 [7], and tangent bug [8]: when the robot encounters an obstacle, it moves along the obstacle boundary until the obstacle is cleared. The classical reactive navigation methods include the vector field histogram (VFH) [9] and its extended versions VFH+ [10] and VFH* [11], which are all based on the virtual vector field method. Histogram methods divide the surrounding environment into angular sectors of a polar histogram and choose the robot's next movement direction according to the proximity of obstacles. Another classical reactive approach is the velocity space approach, which uses evaluation terms such as safety and smoothness to calculate the next movement. Well-known velocity space algorithms are the dynamic window approach (DWA) [12,13] and the curvature velocity method (CVM) [14]. For the nonlinear problem of moving obstacles, Seder, M. and Petrovic, I. [2] proposed an extension of DWA (the time-varying dynamic window algorithm), in which moving obstacles are modeled as moving cells in a gridded map. Considering the time factor in obstacle avoidance, Molinos, E. J. et al. proposed the DW4DO and DW4DOT methods based on CVM to plan safer and longer obstacle avoidance paths [15], and applied them in a dynamic environment. However, the main drawback of local navigation algorithms is local minima, especially in the case of U-shaped obstacles. One approach to avoiding the local minima problem is to select a local target in the robot's surroundings and compute the direction of the next step. The Lane Curvature Method (LCM) [16] and Beam Curvature Method (BCM) [17], both based on CVM, are representative of this approach: LCM obtains the local target by dividing the channel, whereas BCM obtains it by dividing the sector. Moon, J. et al. proposed a hybrid dynamic window approach that prevents the agent from falling into local minima and is superior to DWA [18]. Yu, X. Y. et al. proposed a dynamic window method with a virtual goal for dynamic environments, which lets the robot escape concave traps [19]. There are also evolutionary approaches based on neural networks or fuzzy logic [20,21,22]. Tarokh, M. proposed a hybrid intelligent approach combining fuzzy logic and a genetic algorithm that enables robots to quickly find an optimal path avoiding rough areas, effectively solving path planning for highly mobile robots in rough environments [23]. Chen, L. et al. proposed a conditional deep Q-network with fuzzy logic that handles directional planning in end-to-end autonomous driving systems and the independence of different motion commands; it achieves better learning performance and driving stability but does not take into account obstacles such as cars and pedestrians [24].
Whether graph-based search methods or local navigation algorithms, most of these approaches target a flat environment. For path planning in a space environment, popular approaches mainly integrate obstacles, speeds, and path lengths into constraints and generate trajectories by curve parameterization [25,26]. Most numerical methods use path-point parameterization to define splines, polynomials, and so on. However, for nonlinear problems these parameterization algorithms require substantial computing resources, and without further assumptions or simplifications they are not suitable for dynamic obstacles. Based on a local parameterized guidance vector field, Marchidan, A. et al. proposed a method that generates a constant-velocity vector field by decomposing unmanned aerial vehicle (UAV) kinematics into normal and tangential components with respect to the obstacle boundary [27]. This method can effectively avoid both static and dynamic obstacles, but it assumes agents fly at a constant height and hence simplifies the three-dimensional (3D) problem to a two-dimensional (2D) one. Because thrusters provide different power modes, the obstacle avoidance and trajectory planning problems of UAVs and space agents are not identical; however, in multi-objective path planning [28,29], collaborative path planning [30,31], and cluster motion [32], they face similar problems. Based on the Legendre polynomial method, Chamitoff, G. E. et al. proposed the ASTRO (admissible subspace trajectory optimizer) method, which transforms the complex cost function into a simple convex form, and applied it to real-time 3D trajectory optimization [33]. Nonetheless, when the constraints are complex, numerical methods still suffer from local minima during optimization.
To address navigation in a three-dimensional environment, this paper proposes a fuzzy dynamic window approach (DF_DWA). For traditional DWA, weights of 0.8, 0.1, and 0.1 for the azimuth, obstacle clearance, and speed terms, respectively, are believed to give good results in some situations [2]. Nevertheless, as the environment becomes more complex, this set of weights does not suit all situations; using the same weight coefficients in different working environments can cause obstacle avoidance to fail or bring the robot to a standstill [15]. Zhang, H. et al. suggested using fuzzy logic to update the weights of the DWA objective function by analyzing the distribution of eight obstacles [34]. Abubakr, O. A. et al. simplified the obstacle distribution to three situations, thereby reducing the number of fuzzy rules and improving computational efficiency [35]. However, both methods update the weights only from the distribution of obstacles, without considering obstacle distance; as a result, the robot can easily fall into U-shaped obstacles or local minima [36,37]. Moreover, the existing objective function does not consider the pointing problem when the agent approaches the target. In this article, a new objective function with an added distance-to-goal term is established to eliminate the influence of angular orientation. The weight parameters are adjusted by fuzzy logic so that the space robot can adapt to environmental changes, and a dual dynamic window method together with local goals keeps the agent from being trapped in local minima and U-shaped obstacles.
The paper is organized as follows: Section 2 describes the collision-avoidance problem. Section 3 designs the fuzzy rules and describes the dynamic window approach. Section 4 analyzes the effectiveness of the method by simulation. Finally, Section 5 draws conclusions from the results of the study.

2. Formulation of the Collision-Avoidance Problem

In this paper, we mainly focus on 3D real-time trajectory planning for space-based robots such as SPHERES [38] and astronaut assistant robots [39]. The agent works in a space station or a specific working range whose flying distance is much shorter than the orbit radius. The algorithm can also be extended to 3D trajectory planning of other types of agents. This section provides the mathematical model of the motion and attitude of the spatial agent. In a 3D environment, the attitude of the agent is represented by the pitch angle θ, yaw angle ψ, and roll angle ϕ, as shown in Figure 1. In the reference coordinate system (Xf, Yf, Zf), the equation of the agent's motion is:
$$\begin{bmatrix} X_f(t+1) \\ Y_f(t+1) \\ Z_f(t+1) \end{bmatrix} = \begin{bmatrix} X_f(t) \\ Y_f(t) \\ Z_f(t) \end{bmatrix} + \Delta t \, R_b^f \begin{bmatrix} v_x^b \\ v_y^b \\ v_z^b \end{bmatrix} \tag{1}$$
where Δt is the sampling time. The conversion between the reference coordinate system oXfYfZf and the body coordinate system oXbYbZb follows the 3-2-1 rotation order. Therefore, the transformation matrix from the body frame to the reference frame is:
$$R_b^f = \begin{bmatrix} c\psi\, c\theta & -s\psi\, c\phi + c\psi\, s\theta\, s\phi & s\psi\, s\phi + c\psi\, s\theta\, c\phi \\ s\psi\, c\theta & c\psi\, c\phi + s\psi\, s\theta\, s\phi & -c\psi\, s\phi + s\psi\, s\theta\, c\phi \\ -s\theta & c\theta\, s\phi & c\theta\, c\phi \end{bmatrix} \tag{2}$$
where c and s are the cosine and sine, respectively. The relationship between the angular velocity component and Euler angle is:
$$R(\xi) = \begin{bmatrix} 1 & 0 & -s\theta \\ 0 & c\phi & s\phi\, c\theta \\ 0 & -s\phi & c\phi\, c\theta \end{bmatrix} \tag{3}$$
$$\dot{\xi} = R(\xi)^{-1} \, \omega \tag{4}$$
where ξ = [ϕ, θ, ψ]T, ω = [ωx, ωy, ωz]T, ωx, ωy and ωz are the components of the angular velocity ω in the axis of the body coordinate. The equation of the attitude of the agent is written as:
$$\begin{bmatrix} \phi(t+1) \\ \theta(t+1) \\ \psi(t+1) \end{bmatrix} = \begin{bmatrix} \phi(t) \\ \theta(t) \\ \psi(t) \end{bmatrix} + \dot{\xi}\, \Delta t \tag{5}$$
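A minimal numerical sketch of this kinematic model (Equations (1)-(5)) is given below, assuming NumPy. The signs in the rotation matrices follow the standard 3-2-1 convention, since the published layout of Equations (2) and (3) does not preserve them; function names are ours.

```python
import numpy as np

def rotation_body_to_ref(phi, theta, psi):
    """3-2-1 (yaw-pitch-roll) rotation matrix R_b^f, Equation (2)."""
    c, s = np.cos, np.sin
    return np.array([
        [c(psi)*c(theta), -s(psi)*c(phi) + c(psi)*s(theta)*s(phi),  s(psi)*s(phi) + c(psi)*s(theta)*c(phi)],
        [s(psi)*c(theta),  c(psi)*c(phi) + s(psi)*s(theta)*s(phi), -c(psi)*s(phi) + s(psi)*s(theta)*c(phi)],
        [-s(theta),        c(theta)*s(phi),                         c(theta)*c(phi)],
    ])

def euler_rate_matrix(phi, theta):
    """R(xi) relating Euler-angle rates to body angular velocity, Equation (3).
    Singular at theta = +/- 90 degrees, as usual for Euler angles."""
    c, s = np.cos, np.sin
    return np.array([
        [1.0, 0.0,     -s(theta)],
        [0.0, c(phi),   s(phi)*c(theta)],
        [0.0, -s(phi),  c(phi)*c(theta)],
    ])

def propagate(pos, xi, v_body, omega_body, dt):
    """One step of Equations (1), (4), and (5): position in the reference
    frame and attitude xi = [phi, theta, psi]."""
    phi, theta, psi = xi
    pos_next = pos + dt * rotation_body_to_ref(phi, theta, psi) @ v_body
    xi_dot = np.linalg.solve(euler_rate_matrix(phi, theta), omega_body)
    return pos_next, xi + dt * xi_dot
```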
The trajectory of the agent is generated by Equations (1) and (5). Due to sensor limitations, the range the agent can perceive at a given time is limited; for example, the effective detection distance of a vehicle-mounted lidar is within 6–8 m. Taking the geometric center of the agent as the origin, the sensed range is:
$$S(t) = \left\{ p \in \mathbb{R}^3 : \left\| p - P(t) \right\| \le r_s \right\} \tag{6}$$
where P(t) = [x, y, z] is the position of the agent and p is a point within the detection range; rs is the detection radius of the sensor. Within this range, the space occupied by an obstacle can be expressed as:
$$O(t) = \left\{ p \in \mathbb{R}^3 : \left\| p - P_o(t) \right\| \le r_o \right\} \tag{7}$$
where Po(t) = [xo(t), yo(t), zo(t)] is the position of the obstacle and ro is its radius. In this paper, obstacles and agents are assumed to be spherical with radii Robstacle and Ragent, respectively. Users can also refer to [40] to configure agents and obstacles with different boundary constraints. Obstacle information can be obtained by sensors [41]. All admissible positions of the agent can then be expressed as:
$$\Omega(t) = S(t) - \left( O(t) + S_b(t) \right) \tag{8}$$
where Sb(t) is an added safety zone that accounts for measurement and process uncertainty. Because of sensor limitations, new obstacles appear in the search area and the environment within the search range keeps changing, which requires the algorithm to adapt. In addition, some obstacles may be dynamic, so the admissible range of the agent is a time-varying region. In response to these problems, we propose a fuzzy dynamic window method, discussed in Section 3.
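As a concrete reading of Equation (8), the following sketch tests whether a point is admissible: inside the sensed sphere S(t) and outside every obstacle inflated by the safety margin. It assumes spherical obstacles of a common radius, matching the paper's assumption; the helper name is ours.

```python
import numpy as np

def admissible(p, agent_pos, obstacle_positions, r_s, r_o, safety):
    """Check whether point p lies in Omega(t), Equation (8)."""
    if np.linalg.norm(p - agent_pos) > r_s:          # outside sensor range S(t)
        return False
    for p_o in obstacle_positions:                   # obstacle centres at time t
        if np.linalg.norm(p - p_o) <= r_o + safety:  # inside O(t) + S_b(t)
            return False
    return True
```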

3. Proposed Dynamic Window Approach

DWA is a commonly used method for local obstacle avoidance. During an iteration, the algorithm computes all feasible speed groups for the next iteration, and the optimal motion combination is then selected by an evaluation function. The algorithm adopted in the present paper differs from the original as follows:
(1) DWA is extended to a 3D dynamic environment.
(2) New evaluation items are added to the original evaluation function, yielding a new evaluation function. Fuzzy logic is introduced to adjust the weight of each evaluation item according to the working conditions; the distance from the agent to the goal and the distance between the agent and the obstacles are the basis for the adjustment.
(3) Two different velocity windows are evaluated at the same time. To ensure trajectory safety, the maximum velocity window is used to calculate the braking distance and the distance to the obstacle.
(4) Local goals are used to avoid large obstacles or U-shaped obstacles.
Figure 2 summarizes the algorithm workflow.

3.1. The Two Velocity Windows

Due to acceleration and velocity limitations, the agent cannot move arbitrarily. Meanwhile, a high iteration rate or nearby obstacles may make the agent's speed extremely slow, which can cause valid speed groups to be ignored and the run to fail. Therefore, two-level velocity windows are used to deal with this problem, as shown in Figure 3.
Figure 3a shows the two-level velocity windows. The black cube represents the agent and the blue cube represents the 'local window': the velocities reachable by the agent in the next iteration, limited by the agent's acceleration and maximum velocity. The light cube represents the window of the maximum speed the agent can reach, which ignores the agent's kinematic constraints. In Figure 3b, according to the elevation and azimuth angles, the agent's motion is divided into different motion planes to form a 3D fan-shaped area. The dense part of the 3D sector represents the positions the agent can reach if it maintains its velocity for 3 s. The dark-blue area represents the positions the agent can reach at maximum speed. The red rectangular prism represents the position of the agent selected by the evaluation function.
With two-level velocity windows, when the agent's iteration speed is extremely slow, the best trajectory within the maximum-speed window is determined first, and the agent then moves toward that optimal position in the next iteration. On this basis, we can further analyze how the agent determines the best local goal when it encounters a large or U-shaped obstacle.
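The two windows can be sketched as follows, using the limits from Table 6. The dataclass and function names are ours, and the angular clipping is shown for one axis (in 3D the same clipping applies to each angular rate).

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Kinematics:
    """Illustrative limits taken from Table 6; field names are ours."""
    v_min: float = 0.0
    v_max: float = 2.0                   # m/s
    omega_max: float = np.radians(60)    # rad/s
    a_max: float = 0.5                   # m/s^2
    alpha_max: float = np.radians(90)    # rad/s^2
    r_safe: float = 0.3                  # m, safe distance

def local_window(v, omega, kin, dt):
    """Inner window: velocities reachable in one iteration, clipped to the
    absolute limits."""
    return (max(v - kin.a_max * dt, kin.v_min),
            min(v + kin.a_max * dt, kin.v_max),
            max(omega - kin.alpha_max * dt, -kin.omega_max),
            min(omega + kin.alpha_max * dt,  kin.omega_max))

def max_window(kin):
    """Outer window: the full velocity envelope, ignoring acceleration limits;
    used for the braking-distance and obstacle-distance checks."""
    return (kin.v_min, kin.v_max, -kin.omega_max, kin.omega_max)
```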

3.2. The Candidate Local Goal

As shown in Figure 4, candidate local goal points are generated in a safe fan-shaped area in which the agent can fly, and the optimal candidate is then selected.
The candidate local goal points are defined by:
$$\mathrm{goal}_{local}(i) = \left( X(t), \phi_i(t), \theta_i(t) \right) + R_s \left( \eta, \frac{j\pi}{m R_s}, \frac{k\pi}{m R_s} \right) \tag{9}$$
where goallocal(i) are the candidate local goal points, ϕi(t) and θi(t) are the azimuth and elevation of the candidates, X(t) is the position of the agent, and RS is the safe separation between the agent and obstacles. j and k are indices of the candidate local goals and m is their number: the larger m is, the more local goal points can be selected and the better the optimized local goal. RS is defined by:
$$R_s = R_r + r_0 + R_{braking}(t) \tag{10}$$
$$R_{braking}(t) = \frac{v_t^2}{a_{max}} \tag{11}$$
$$\eta = \begin{bmatrix} \cos\phi_i \cos\theta_i \\ \sin\phi_i \cos\theta_i \\ \sin\theta_i \end{bmatrix} \tag{12}$$
where Rr is the agent radius, Rbraking is the braking distance of the agent, vt is the current velocity of the agent, and amax is the maximum acceleration that the agent can achieve.
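A sketch of the candidate generation follows. Sampling azimuth and elevation on an m-by-m grid is our assumption, since Equation (9) does not fully specify the indexing; everything else follows Equations (10)-(12).

```python
import numpy as np

def candidate_local_goals(x, v, kin, r_agent, r_obstacle, m=8):
    """Candidate local goals on a sphere of radius R_s around the agent."""
    r_braking = v ** 2 / kin.a_max                   # Equation (11)
    r_s = r_agent + r_obstacle + r_braking           # Equation (10)
    goals = []
    for j in range(m):
        for k in range(m):
            phi = 2.0 * np.pi * j / m                # azimuth sample
            theta = np.pi * k / m - np.pi / 2.0      # elevation sample
            eta = np.array([np.cos(phi) * np.cos(theta),
                            np.sin(phi) * np.cos(theta),
                            np.sin(theta)])          # Equation (12)
            goals.append(np.asarray(x, float) + r_s * eta)
    return goals
```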

3.2.1. Distance-to-the-Virtual Goal Term

This evaluation term is used to select the candidate virtual local goal point closest to the real goal.
$$dist_1 = 1 - \frac{ \mathrm{dist}\left( P_{goal_{local}}, P_{goal} \right) }{ d_{1max} } \tag{13}$$
where Pgoal is the position of the real goal, Pgoallocal is the position of the virtual local goal, d1max is the distance between the real goal and the current position of the agent.
$$R_{s1} = R_r + R_{braking}(t) \tag{14}$$
A safe area is established between the agent and the local target, defined as a cuboid region, as shown in Figure 5. If obstacles exist in the safe area between the agent and a candidate local goal, that candidate is deleted from goallocal(i).

3.2.2. Orientation-to-the-Local Goal Term

This evaluation term is used to select the candidate local goal point with the smallest angle between the real goal and the local goal points.
$$heading_{local} = 1 - \frac{ \arccos\left( P_{goal_{local}} \cdot P_{goal} \right) }{ \pi } \tag{15}$$

3.2.3. Distance-to-the-Obstacle Term

This evaluation term is used to select a local goal point far away from obstacles.
$$dist_2 = \frac{ \mathrm{dist}\left( P_{goal_{local}}, P_{obstacle} \right) }{ \sum_{i=1}^{m} \mathrm{dist}\left( P_{goal_{local}}, P_{obstacle} \right) } \tag{16}$$
where Pobstacle is the position of obstacles detected by the agent, m is the number of obstacles.
The optimal local goal is selected by the evaluation function Equation (17):
$$g_{end} = \mu_{dist1}\, dist_1 + \mu_{head1}\, heading_{local} + \mu_{dist2}\, dist_2 \tag{17}$$
where μdist1, μdist2, and μhead1 are the weights of the terms. The goallocal(i) point with the largest value of gend is used as the local goal.
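The local-goal scoring can be sketched as below. The arccos argument of Equation (15) is read here as the angle between the agent-to-candidate and agent-to-goal directions, and the obstacle term is left unnormalized for brevity; both are assumptions on our part.

```python
import numpy as np

def score_local_goal(g, x, goal, obstacles, weights):
    """Score one candidate local goal, Equations (13), (15)-(17).
    weights = (mu_dist1, mu_head1, mu_dist2)."""
    mu1, mu2, mu3 = weights
    dist1 = 1.0 - np.linalg.norm(g - goal) / np.linalg.norm(goal - x)  # Eq. (13)
    u, w = g - x, goal - x
    cosang = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w) + 1e-12)
    heading = 1.0 - np.arccos(np.clip(cosang, -1.0, 1.0)) / np.pi      # Eq. (15)
    d_obs = min(np.linalg.norm(g - p) for p in obstacles)              # Eq. (16)
    return mu1 * dist1 + mu2 * heading + mu3 * d_obs                   # Eq. (17)

# Candidates whose safe corridor (Figure 5) contains an obstacle are removed
# first; the surviving candidate with the highest score becomes the local goal:
# g_local = max(candidates, key=lambda g: score_local_goal(g, x, goal, obs, mu))
```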

3.3. The Evaluation Function for DF_DWA

Once the dynamic window is set, each set of speeds (linear and angular velocities) is evaluated to select the best combination in the iteration. This paper uses the speed, goal distance, goal orientation, and obstacle clearance as components of the evaluation function. The detailed content of the algorithm DF_DWA is given in Algorithm 1.
Algorithm 1. DF_DWA.
1:  procedure DF_DWA(goal, x_initial = [x0, y0, z0], parameter_initial, Kinematic, p_e, r0)
2:    i ← 1  ◊ iteration counter
3:    for i = 1:5000
4:      if norm(x(i) − goal) > p_e then  ◊ p_e is the allowable position error
5:        if norm(x(i) − obstacle(i)) <= r0 then
6:          break;  ◊ collision: stop
7:        end if
8:        if (there are large obstacles or U-shaped obstacles) then
9:          gend ← (Equations (9), (10), (14), (15), (16));
10:         goal_r ← goal && goal ← V_goal;  ◊ save the real goal, switch to the local goal
11:         if norm(x(i) − goal) <= p_e then
12:           goal ← goal_r;  ◊ local goal reached: restore the real goal
13:         end if
14:       end if
15:       EvalParameter ← evalfis([distance(x(i) − goal), distance(x(i) − obstacle(i))]);  ◊ weights from fuzzy rules
16:       [Vmin(i), Vmax(i), ωmin(i), ωmax(i)] ← DynamicWindow(x(i), Kinematic);
17:       f(V(t), ω(t), distgoal(t), distobstacle(t), heading(yaw(t), pitch(t)), EvalParameter) ← Equation (29);
18:       ind(i) ← max(f);  ◊ velocity group with the highest evaluation value
19:       u(i) ← Kinematic(ind, 1:3);  ◊ the optimal speed group
20:       x(i + 1) ← P(x, u);  ◊ iterate
21:     else if norm(x(i) − goal) < p_e then
22:       break;  ◊ goal reached
23:     end if
24:   end for
25: end procedure
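The following self-contained Python sketch mirrors one iteration of Algorithm 1 (the velocity-window sampling and scoring, without the local-goal switching). It is not the authors' MATLAB code: the obstacle clearance is checked only at the end of each simulated arc, the term naming follows our reading of Equations (18)-(29), and it reuses the Kinematics dataclass from the Section 3.1 sketch.

```python
import numpy as np

def df_dwa_step(x, yaw, pitch, v, obstacles, goal, eps, kin, dt=0.1, horizon=3.0, n=7):
    """One DF_DWA iteration: sample the local velocity window, roll each
    sample forward for 'horizon' seconds, and return the best
    (v, yaw_rate, pitch_rate) under the weighted evaluation function.
    eps = (e_speed, e_goal, e_pitch, e_yaw, e_obs); x, goal are np arrays."""
    e_speed, e_goal, e_pitch, e_yaw, e_obs = eps
    d_max = np.linalg.norm(goal - x)
    best, best_f = None, -np.inf
    v_lo = max(v - kin.a_max * dt, 0.0)
    v_hi = min(v + kin.a_max * dt, kin.v_max)
    for vs in np.linspace(v_lo, v_hi, n):
        for wy in np.linspace(-kin.omega_max, kin.omega_max, n):
            for wp in np.linspace(-kin.omega_max, kin.omega_max, n):
                p, yw, pt = x.astype(float).copy(), yaw, pitch
                for _ in range(int(horizon / dt)):   # constant-velocity arc
                    yw += wy * dt
                    pt += wp * dt
                    p = p + vs * dt * np.array([np.cos(yw) * np.cos(pt),
                                                np.sin(yw) * np.cos(pt),
                                                np.sin(pt)])
                d_obs = min(np.linalg.norm(p - o) for o in obstacles)
                if d_obs < kin.r_safe:               # inadmissible: would collide
                    continue
                d = goal - p
                # angle-wrapped bearing errors; the paper's Eq. (23)/(24) naming
                # attaches 'pitch' to the horizontal bearing, kept here
                yaw_err = abs((np.arctan2(d[1], d[0]) - yw + np.pi) % (2 * np.pi) - np.pi)
                pitch_err = abs((np.arctan2(d[2], np.hypot(d[0], d[1])) - pt + np.pi)
                                % (2 * np.pi) - np.pi)
                f = (e_speed * vs / kin.v_max
                     + e_goal * (1.0 - np.linalg.norm(d) / d_max)
                     + e_pitch * (1.0 - yaw_err / np.pi)
                     + e_yaw * (1.0 - pitch_err / np.pi)
                     + e_obs * d_obs)
                if f > best_f:
                    best_f, best = f, (vs, wy, wp)
    return best  # None if every sampled motion violates the safe distance
```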

3.3.1. Speed Term

The speed term evaluates whether the agent advances at the optimal speed. There are two scenarios when an agent flies toward a goal: the agent faces the goal, or the agent faces away from it. If the goal is in front of the robot, the reward value of this term is:
$$speed(v_t) = \frac{v_t}{v_{max}} \tag{18}$$
When the agent faces away from the target, it only needs to rotate, to save energy:
$$speed(v_t, \omega_t) = \frac{v_t}{v_{max}}\, \alpha + \frac{\omega_t}{\omega_{max}}\, \beta \tag{19}$$
where α and β are the proportional parameters of the linear and angular velocities, with values between 0 and 1. The angular velocity has two maxima: (1) vt = 0, ωt = ωmax; (2) vt = 0, ωt = −ωmax. As the linear velocity increases, the rotation speed decreases until vt = vmax, ωt = 0.
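A direct transcription of the two cases, Equations (18) and (19); the 0.5/0.5 defaults for α and β are assumed values, and abs() covers the two angular maxima at ±ωmax.

```python
def speed_term(v, omega, v_max, omega_max, goal_in_front, alpha=0.5, beta=0.5):
    """Speed reward: translate toward a goal ahead, rotate toward one behind."""
    if goal_in_front:
        return v / v_max                                          # Equation (18)
    return (v / v_max) * alpha + (abs(omega) / omega_max) * beta  # Equation (19)
```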

3.3.2. Distance-to-the-Goal Term

This term rewards trajectories that move the agent towards the goal. The distance from the starting point to the goal is defined as the maximum distance dmax, and the reward is normalized between 0 and 1. Therefore,
$$dist_{goal}(v_t, \omega_t) = 1 - \frac{d_{goal}(t)}{d_{max}} \tag{20}$$
where dgoal is the distance from the position of the agent to the goal, and dmax is the distance from the starting point to the goal.

3.3.3. Orientation-to-the-Goal Term

This heading term rewards curvature arcs that head the agent towards the goal. The direction angle between the end of the trajectory and the goal is compared with the attitude of the agent (i.e., ϕerror, θerror). As shown in Figure 6, ϕerror and θerror are given by:
$$\phi_{error} = \phi_{goal} - \phi \tag{21}$$
$$\theta_{error} = \theta_{goal} - \theta \tag{22}$$
The reward is given by 180° − ϕerror and 180° − θerror, normalized between 0 and 1:
$$heading_{pitch}(v_t, \omega_t) = 1 - \frac{ \left| \arctan 2\left( y_{goal} - y(t),\; x_{goal} - x(t) \right) - th_{pitch}(t) \right| }{ \pi } \tag{23}$$
$$heading_{yaw}(v_t, \omega_t) = 1 - \frac{ \left| \arctan 2\left( z_{goal} - z(t),\; \sqrt{ (y_{goal} - y(t))^2 + (x_{goal} - x(t))^2 } \right) - th_{yaw}(t) \right| }{ \pi } \tag{24}$$
where (xgoal, ygoal, zgoal) is the coordinate of the goal point and (thpitch, thyaw) is the attitude angle of the agent.

3.3.4. Distance-to-the-Obstacle Term

This term rewards the agent traveling far from the obstacles. The distance to obstacles includes the distance to static obstacles and the predicted distance to moving obstacles. If the value of this term is small, the agent is close to obstacles; if it is large, the agent is far from them.
$$dist_{obstacle}(i) = \left\| \left( x(t) - x_{obstacle}(t),\; y(t) - y_{obstacle}(t),\; z(t) - z_{obstacle}(t) \right) \right\| - r_0 \tag{25}$$
$$dist_{obstacle} = dist_{static} + dist_{dynamic} - r_0 \tag{26}$$
where (xobstacle, yobstacle, zobstacle) are the coordinates of the obstacles. The distance to obstacles comprises the distance to static obstacles (diststatic) and the distance to dynamic obstacles (distdynamic).
For a dynamic obstacle, its position and velocity direction are measured by the sensor in real time, and the positions the obstacle can reach at each time interval (Δt) within the prediction horizon are calculated as its predicted positions. Among these positions, the shortest distance to the agent is used as the dynamic obstacle distance (distdynamic), as shown in Figure 7. By considering this distance, the agent plans its next motion command. In practice, polynomial fitting [19] or extended Kalman filtering [33] can be used to predict obstacle trajectories. Choosing the shortest distance as the distance to a dynamic obstacle amplifies the threat the obstacle poses to the agent; the fuzzy rules therefore increase the weight of the distance-to-obstacle term so the agent avoids the obstacle as quickly as possible.
$$dist_{dynamic} = \min \left\| \left( x(t) - x_{preob}(t_i),\; y(t) - y_{preob}(t_i),\; z(t) - z_{preob}(t_i) \right) \right\| \tag{27}$$
where (xpreob, ypreob, zpreob) are the predicted positions of the dynamic obstacle.
$$dist_{obstacle}(i) = \frac{ dist_{obstacle}(i) }{ \sum_{i=1}^{n} dist_{obstacle}(i) } \tag{28}$$
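The prediction step of Equation (27) can be sketched as below. Constant-velocity propagation is an assumption for illustration; the paper cites polynomial fitting [19] and extended Kalman filtering [33] as practical predictors.

```python
import numpy as np

def dist_dynamic(x, p_obs, v_obs, horizon=3.0, dt=0.1):
    """Predicted distance to one moving obstacle, Equation (27): propagate
    the obstacle over the prediction horizon and keep the shortest
    agent-to-predicted-position distance."""
    steps = int(horizon / dt)
    predicted = (np.asarray(p_obs, float) + np.asarray(v_obs, float) * dt * i
                 for i in range(1, steps + 1))
    return min(np.linalg.norm(np.asarray(x, float) - p) for p in predicted)
```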
The reward value of the iteration is calculated using Equation (29). Among all the evaluated speed groups, the pair with the largest evaluation value is selected as the motion instruction.
$$f(v_t, \omega_t) = \varepsilon_{speed}\, speed(v_t, \omega_t) + \varepsilon_{goal}\, dist_{goal}(v_t, \omega_t) + \varepsilon_{pitchheading}\, heading_{pitch}(v_t, \omega_t) + \varepsilon_{yawheading}\, heading_{yaw}(v_t, \omega_t) + \varepsilon_{obstacle}\, dist_{obstacle}(i) \tag{29}$$
where εspeed is the reward weight of speed, εgoal is the reward weight of the distance from the candidate points to the goal, εobstacle is the reward weight of the distance from the agent to the obstacles, and (εpitchheading, εyawheading) are the heading reward weights.

3.4. The Fuzzy Rule for the DF_DWA

According to the evaluation function (Equation (29)), the agent selects the best speed group among the predicted trajectories as the action command. The reward weights in the evaluation function determine the contribution of each term when predicting trajectories, so different weight settings select different trajectories. In the next paragraphs we discuss the impact of the weight settings on trajectory generation and obstacle avoidance, setting each reward weight in turn to zero or to the predominant value.
To test the influence of the reward weights on trajectory generation, we set up the scenarios shown in Figures 8 and 9; the test data are given in Table 1. The starting point of the agent is (0,0,0) and its goal is (4,4,4). α is the number of iterations in each case. v̄ and ω̄ are the average linear and angular velocities of the agent, respectively, and the variances of linear velocity (σv²) and angular velocity (σω²) are used to evaluate the smoothness of the trajectory. In the tests, the agent's speed is limited to 2 m/s (linear) and 60 degree/s (angular), and its acceleration to 0.5 m/s² (linear) and 90 degree/s² (angular).
It can be seen from Figure 8 that when εobstacle is set as the predominant weight, the planned trajectory avoids obstacles; if any other term is predominant, the trajectory can cut through obstacles and obstacle avoidance fails. Figure 9 shows that if εobstacle is set to zero, the trajectory cannot avoid obstacles. When εspeed is the predominant weight, Table 1 shows that the number of iterations is the smallest; if εspeed is set to zero, the agent can still reach the target safely, but the number of iterations doubles. Figures 8 and 9 show the same behavior when εheading is set to zero or εgoal is set as the predominant weight: the trajectory reaches the goal only after a full circle, which increases the number of iterations and makes the trajectory less smooth. Figure 9 also shows that without εpitchheading the trajectory rotates horizontally, and without εyawheading it rotates vertically, so the weights of these two terms should not deviate too much from each other. In addition, Figure 9a shows that if εgoal is zero, the agent falls into a local minimum.
The above analysis shows that a single, constant combination of reward weights cannot guarantee the safety of the agent's trajectory: when the environment changes, the agent cannot adapt. Therefore, building on traditional DWA, this paper introduces fuzzy rules to adjust the reward weights.
When setting the fuzzy rules, the distance to the obstacle and the distance to the target are the inputs, and the evaluation coefficients of the speed, distance to the goal, distance to the obstacle, and direction to the goal are the outputs. The membership functions are shown in Figures 10 and 11.
(1) When the distance to the goal and the distance to the obstacles are both long (i.e., longer than 5 agent radii), the agent does not need to avoid obstacles first; it should accelerate to a certain speed and approach the goal with moderate attention. εobstacle can therefore be set small, εgoal can be increased moderately, and εheading can be set large.
(2) When the distance to the goal is long and the distance to the obstacles is short (i.e., shorter than 2 agent radii), the agent should avoid obstacles first and not rush toward the goal. The speed weight should be reduced so the agent has enough time to adjust its forward direction. Therefore, εobstacle should be set large while εgoal, εheading, and εspeed are set small.
(3) When the distance to the goal is short and the distance to the obstacles is long, the agent should approach the goal first and not rush to avoid obstacles. The speed weight should be reduced appropriately to ensure it reaches the goal safely. εobstacle and εspeed can therefore be set small, with εgoal as the predominant weight and εheading at a medium value.
(4) When the distance to the goal and the distance to the obstacles are both short, the agent should avoid obstacles first and approach the goal with moderate attention. The speed weight should be reduced: εspeed and εheading should be small, εobstacle should be the predominant weight, and εgoal should be medium.
The fuzzy rules relating Distgoal and Distobstacle to εgoal, εobstacle, εspeed, and εheading are shown in Tables 2–5.
Mamdani fuzzy inference is used, and defuzzification adopts the center-of-gravity method. The evaluation function becomes:
$$f(v_t, \omega_t) = \varepsilon_{speed}\, speed(v_t, \omega_t) + \varepsilon_{goal}\, dist_{goal}(v_t, \omega_t) + \varepsilon_{pitchheading}\, heading_{pitch}(v_t, \omega_t) + \varepsilon_{yawheading}\, heading_{yaw}(v_t, \omega_t) + \varepsilon_{obstacle}\, dist_{obstacle}(i) \tag{30}$$
Based on the fuzzy rules, if the minimum value of the input obstacle distance is set larger, the agent reacts to obstacles earlier. Meanwhile, the minimum of the εspeed and εheading fuzzy domains is not set to zero, which prevents the agent from falling into local minima because of low velocity. The specific steps of the DF_DWA algorithm are given in Algorithm 1.
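A minimal sketch of the inference step follows. The membership breakpoints are placeholders (the paper's exact shapes are in Figures 10 and 11, and inputs are assumed pre-scaled to [0, 4], e.g., in units of agent radii); the rule table is transcribed from Table 2, and defuzzification is simplified to a weighted mean of output peaks rather than the full center-of-gravity.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with shoulders when a == b or b == c."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

GOAL_SETS = {'Z': (0, 0, 1), 'PS': (0, 1, 2), 'PM': (1, 2, 3),
             'PB': (2, 3, 4), 'PH': (3, 4, 4)}
OBS_SETS = {'ZS': (0, 0, 1), 'PS': (0, 1, 2), 'PM': (1, 2, 3),
            'PB': (2, 3, 4), 'PH': (3, 4, 4)}
OUT_PEAK = {'Z': 0.0, 'ZS': 0.1, 'PS': 0.25, 'PM': 0.5, 'PB': 0.75, 'PH': 1.0}

# Rule table for epsilon_goal, transcribed from Table 2 (rows: Dist_goal).
RULES_GOAL = {
    ('Z', 'ZS'): 'PM', ('Z', 'PS'): 'PM', ('Z', 'PM'): 'PM', ('Z', 'PB'): 'PB', ('Z', 'PH'): 'PH',
    ('PS', 'ZS'): 'PS', ('PS', 'PS'): 'PS', ('PS', 'PM'): 'PM', ('PS', 'PB'): 'PB', ('PS', 'PH'): 'PH',
    ('PM', 'ZS'): 'Z', ('PM', 'PS'): 'PS', ('PM', 'PM'): 'PM', ('PM', 'PB'): 'PB', ('PM', 'PH'): 'PB',
    ('PB', 'ZS'): 'Z', ('PB', 'PS'): 'Z', ('PB', 'PM'): 'PS', ('PB', 'PB'): 'PM', ('PB', 'PH'): 'PB',
    ('PH', 'ZS'): 'Z', ('PH', 'PS'): 'Z', ('PH', 'PM'): 'Z', ('PH', 'PB'): 'PS', ('PH', 'PH'): 'PM',
}

def infer_weight(d_goal, d_obs, rules=RULES_GOAL):
    """Mamdani-style inference: fire every rule with min-AND, then combine."""
    num = den = 0.0
    for gl, gp in GOAL_SETS.items():
        for ol, op in OBS_SETS.items():
            w = min(tri(d_goal, *gp), tri(d_obs, *op))  # rule firing strength
            if w > 0.0:
                num += w * OUT_PEAK[rules[(gl, ol)]]
                den += w
    return num / den if den > 0.0 else 0.0
```

For example, with a far goal and a very close obstacle (d_goal = 3.5, d_obs = 0.2), only the Z-output rules fire and εgoal comes out near zero, matching case (2) above.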

4. Simulation Results

In this section, the simulation results are discussed. The simulations were run in 64-bit MATLAB R2018a on an Intel Core i7-9750H (2.6 GHz) processor. The parameters used in the numerical simulations are listed in Table 6.

4.1. Static Obstacle Avoidance

In this scenario there are multiple static obstacles in the space, plus an L-shaped obstacle with a height of 2.4 m and a length of 1.8 m. Section 3.4 discussed the performance of planned trajectories when the reward weights take different values; in this test, fuzzy logic adjusts the weights.
Figure 12a shows the trajectories planned by the three methods and Figure 12b shows the velocity change of the agent. Table 7 lists the number of iterations and the minimum distance between the agent and obstacles. In terms of planning time, Figure 12b and Table 7 show that F_DWA takes the most time and DWA the least. Regarding safety, the trajectory planned by DF_DWA stays farthest from obstacles, so it is the safest. The minimum distance between the DWA trajectory and obstacles is 0.28 m, less than the safety distance of 0.3 m: although the agent reaches the goal in the least time when following the DWA trajectory, safety is compromised. F_DWA takes the most time because its velocity falls into local minima many times. Therefore, DF_DWA is superior to F_DWA and DWA in terms of computational efficiency and safety.
The simulation results show that, compared with the DW, BUG, and PF algorithms in [19], DF_DWA keeps the agent's trajectory safer when escaping local minima. Compared with the hybrid dynamic window method [18], DF_DWA keeps the agent a safer distance from obstacles when escaping U-shaped traps. Compared with DW4DO [15], DF_DWA can adjust the weights of the optimization function as the obstacle scene changes.

4.2. Dynamic Obstacle Avoidance

In space activities, multiple agents are sometimes required to complete tasks at the same time; to each agent, the other agents are dynamic obstacles. Each agent carries sensors that perceive the surrounding environment, so in abnormal situations, such as an agent chasing the agent ahead, the leading agent can perceive it.
In this section, each agent detects the locations of other agents with its sensors and calculates the distances between them [19,33,42]. With DF_DWA, the speed, although it does not change much, is not constant, so the safety distance to obstacles must be strictly maintained. In this paper, the current maximum window speed is used to calculate the braking distance (i.e., Rbraking).
(1) Two agents
To further test the obstacle avoidance ability of the algorithm in a dynamic environment, we set up a scenario in which two agents move toward each other and are obstacles to each other. As shown in Figure 13a, Agent 1 starts at (0,0,3) with goal (8,0.5,5), and Agent 2 starts at (8,0,3) with goal (0,0.7,5). The test result shows that the situation is resolved safely: Agent 1 moves upward and Agent 2 moves downward when the separation between them is short. The black dotted line is the trajectory of Agent 2 as predicted by Agent 1.
Figure 13b shows that εspeed is set as the predominant weight when the two agents are initially far apart. As the agents approach each other, the weight of εobstacle gradually increases, and when they reach the obstacle-avoidance range it exceeds 0.9. After escaping the threat posed by each other, the first task of each agent is to reach its goal, so εgoal gradually increases. When the goal is about to be reached, the speed weight is decreased to prevent the agent from overshooting the goal at high speed.
(2) Four agents
In the previous subsection, when two agents move toward each other, each has sufficient space for movement; when multiple agents fly close to each other, the space for avoiding obstacles shrinks. To verify the obstacle avoidance ability in this situation, two scenarios are set up: four agents fly to four nearby locations within the same plane (FFLSP), and four agents cross each other while flying to designated locations (FFCL).
In the FFLSP scenario, four agents fly to different positions at the same height, similar to formation or assembly tasks. Agent 1 starts at (0,2,0) with goal (5,4,5); Agent 2 starts at (0,0,0) with goal (5,6,5); Agent 3 starts at (4,2,0) with goal (3,4,5); Agent 4 starts at (4,0,0) with goal (3,6,5). As shown in Figure 14a, the agents safely reach their goal positions. Figure 14c shows the minimum distance between agents, i.e., the smallest value in the set of distances between each agent and the others at the same moment during flight. The minimum distances of Agent 1 and Agent 2 are similar before the 95th iteration, showing that Agents 1 and 2 are the main obstacles to each other. At the 78th iteration, the minimum separation is 0.5324 m, which is greater than the safety distance of 0.3 m.
In the FFCL scenario, four agents fly in from four diagonal directions, simulating multiple agents reaching their positions at the same time to carry out their respective tasks. During the process the agents pass through similar path points and are unknown dynamic obstacles to each other. Agent 1 starts at (7,1,1) with goal (8,1,1); Agent 2 starts at (0,2,0) with goal (5,4,5); Agent 3 starts at (0,7,4) with goal (8,1,1); Agent 4 starts at (6,5,4) with goal (1,1,1). Figure 14b shows that the agents safely reach their goal points. Figure 14d shows the minimum distances between agents at the same moment; at the 69th iteration, the minimum distance between Agent 1 and Agent 3 is 1.258 m, far greater than the safety distance of 0.3 m.
The analysis of the minimum distances shows that the planned trajectories are safe and reliable. Figure 15 shows how the reward weights change in the FFLSP and FFCL scenarios. In the FFLSP scenario, when an agent starts moving, the distances to its goal and to the obstacles are relatively long; therefore, in Figure 15a, εgoal is set as the predominant weight at the beginning. As the agents approach their goals, the εobstacle weights of Agents 1 and 2 gradually increase before the 75th iteration, and εobstacle remains the predominant weight until the 95th iteration. Corresponding to Figure 14c, the minimum obstacle distance between Agents 1 and 2 also falls in this iteration interval, so the agents avoid obstacles first. When Agents 1 and 2 escape the collision threat, εobstacle gradually decreases while εgoal gradually increases. For Agent 3, εgoal is always the predominant weight, so Agent 3 is unlikely to collide during operation. For Agent 4, εobstacle gradually increases from the 59th iteration and peaks at the 91st iteration; Figure 14c shows that Agent 4 reaches its closest approach at the 91st iteration. The agent needs to avoid obstacles first and then fly to the goal, which is why εobstacle is at its maximum there.
Figure 14d shows that the minimum obstacle distance is far longer than the safe distance of 0.3 m, so the separation between agents is safe; accordingly, in Figure 15b the reward weights fluctuate only slightly. While approaching the goal, εspeed and εobstacle gradually decrease while εgoal gradually increases; near the goal, εspeed and εobstacle reach their minima and εgoal its maximum. This analysis of the weight changes shows that DF_DWA is effective because the reward weights adjust with the environment, thereby ensuring the safety of the agent's trajectory. Compared with DW4DO [15], DF_DWA ensures smooth operation by reacting to moving obstacles in advance. Compared with ASTRO [33], it reduces the mathematical constraints between two moving agents, making it more conducive for multiple agents to operate together and complete space tasks through data sharing. Table 8 gives all the numerical results of the dynamic obstacle avoidance tests; the planned trajectories are smooth and suitable for use in a navigation framework.

5. Conclusions

This paper proposes a three-dimensional obstacle avoidance approach for dynamic environments based on DWA. The weight of each parameter in the evaluation function adjusts in real time to the working environment so the agent selects a safe trajectory. Simulation tests show that the approach responds well to large obstacles and dynamic obstacles and plans a safe trajectory, and the numerical results show that the planned trajectory is smooth and applicable in a navigation framework. Compared with other methods, DF_DWA does not require complicated constraint formulas, reduces the computation required, and improves the ability to respond to environmental changes. In future work, we will continue to study obstacle avoidance algorithms for dynamic obstacles and improve the experimental conditions.

Author Contributions

Conceptualization, C.X.; methodology, C.X.; validation, C.X.; formal analysis, C.X.; investigation, C.X.; data curation, C.X.; writing—original draft preparation, C.X.; writing—review and editing, M.X.; funding acquisition, Z.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Science Foundation of China (No.11672290, No.11972343, No. 91848202).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

This work is supported in part by the National Science Foundation of China (No.11672290, No.11972343, No. 91848202).

Conflicts of Interest

No conflict of interest exists in the submission of this manuscript, and the manuscript is approved by all authors for publication.

References

1. Rybus, T. Obstacle avoidance in space robotics: Review of major challenges and proposed solutions. Prog. Aerosp. Sci. 2018, 101, 31–48.
2. Seder, M.; Petrović, I. Dynamic window based approach to mobile robot motion control in the presence of moving obstacles. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 1986–1991.
3. Szczerba, R.J.; Galkowski, P.; Glickstein, I.S.; Ternullo, N. Robust algorithm for real-time route planning. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 869–878.
4. Stentz, A. Optimal and efficient path planning for partially-known environments. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego, CA, USA, 8–13 May 1994; pp. 3310–3317.
5. Stentz, A. The Focussed D* Algorithm for Real-Time Replanning. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI'95), San Francisco, CA, USA, 1995; Volume 2, pp. 1652–1659.
6. Bruce, J.; Veloso, M.M. Real-time randomized path planning for robot navigation. In Robot Soccer World Cup; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2752, pp. 288–295.
7. Lumelsky, V.J.; Stepanov, A.A. Path-planning strategies for a point mobile automaton moving amidst unknown obstacles of arbitrary shape. Algorithmica 1987, 2, 403–430.
8. Kamon, I.; Rivlin, E.; Rimon, E. New range-sensor based globally convergent navigation algorithm for mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; Volume 1, pp. 429–435.
9. Borenstein, J.; Koren, Y. The vector field histogram—Fast obstacle avoidance for mobile robots. IEEE Trans. Robot. Autom. 1991, 7, 278–288.
10. Ulrich, I.; Borenstein, J. VFH+: Reliable obstacle avoidance for fast mobile robots. In Proceedings of the 1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium, 16–20 May 1998; Volume 2, pp. 1572–1577.
11. Ulrich, I.; Borenstein, J. VFH*: Local obstacle avoidance with look-ahead verification. In Proceedings of the 2000 IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; Volume 3, pp. 2505–2511.
12. Fox, D.; Burgard, W.; Thrun, S. The dynamic window approach to collision avoidance. IEEE Robot. Autom. Mag. 1997, 4, 23–33.
13. Ozdemir, A.; Sezer, V. A hybrid obstacle avoidance method: Follow the gap with dynamic window approach. In Proceedings of the 2017 First IEEE International Conference on Robotic Computing (IRC), Taichung, Taiwan, 10–12 April 2017; pp. 257–262.
14. Simmons, R. The curvature-velocity method for local obstacle avoidance. In Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; Volume 4, pp. 3375–3382.
15. Molinos, E.J.; Llamazares, Á.; Ocaña, M. Dynamic window based approaches for avoiding obstacles in moving. Robot. Auton. Syst. 2019, 118, 112–130.
16. Nak, Y.K.; Simmons, R.G. The lane-curvature method for local obstacle avoidance. In Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems, Victoria, BC, Canada, 13–17 October 1998; pp. 1615–1621.
17. Fernández, J.L.; Sanz, R.; Benayas, J.A.; Diéguez, A.R. Improving collision avoidance for mobile robots in partially known environments: The beam curvature method. Robot. Auton. Syst. 2004, 46, 205–219.
18. Moon, J.; Lee, B.Y.; Tahk, M.J. A hybrid dynamic window approach for collision avoidance of VTOL UAVs. Int. J. Aeronaut. Sp. Sci. 2018, 19, 889–903.
19. Xinyi, Y.; Yichen, Z.; Liang, L.; Linlin, O. Dynamic window with virtual goal (DW-VG): A new reactive obstacle avoidance approach based on motion prediction. Robotica 2019, 37, 1438–1456.
20. Balan, K.; Manuel, M.P.; Faied, M.; Krishnan, M.; Santora, M. A fuzzy based accessibility model for disaster environment. In Proceedings of the IEEE International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019; pp. 2304–2310.
21. Mbede, J.B.; Melingui, A.; Zobo, B.E.; Merzouki, R.; Bouamama, B.O. zSlices based type-2 fuzzy motion control for autonomous robotino mobile robot. In Proceedings of the 2012 8th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Suzhou, China, 8–10 July 2012; pp. 63–68.
22. Mohanty, P.K.; Parhi, D.R. Path generation and obstacle avoidance of an autonomous mobile robot using intelligent hybrid controller. In Proceedings of the International Conference on Swarm, Evolutionary, and Memetic Computing; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7677, pp. 240–247.
23. Tarokh, M. Hybrid intelligent path planning for articulated rovers in rough terrain. Fuzzy Sets Syst. 2008, 159, 2927–2937.
24. Chen, L.; Hu, X.; Tang, B.; Cheng, Y. Conditional DQN-based motion planning with fuzzy logic for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2020, 1–12.
25. Upadhyay, S.; Ratnoo, A. Smooth path planning for unmanned aerial vehicles with airspace restrictions. J. Guid. Control Dyn. 2017, 40, 1596–1612.
26. Mattei, M.; Blasi, L. Smooth flight trajectory planning in the presence of no-fly zones and obstacles. J. Guid. Control Dyn. 2010, 33, 454–462.
27. Marchidan, A.; Bakolas, E. Collision avoidance for an unmanned aerial vehicle in the presence of static and moving obstacles. J. Guid. Control Dyn. 2020, 43, 96–110.
28. Hu, C.; Zhang, Z.; Yang, N.; Shin, H.S.; Tsourdos, A. Fuzzy multiobjective cooperative surveillance of multiple UAVs based on distributed predictive control for unknown ground moving target in urban environment. Aerosp. Sci. Technol. 2019, 84, 329–338.
29. Allen, R.E. A real-time framework for kinodynamic planning in dynamic environments with application to quadrotor obstacle avoidance. Robot. Auton. Syst. 2019, 115, 174–193.
30. Zhen, Z.; Chen, Y.; Wen, L.; Han, B. An intelligent cooperative mission planning scheme of UAV swarm in uncertain dynamic environment. Aerosp. Sci. Technol. 2020, 100, 105826.
31. Zhang, N.; Gai, W.; Zhong, M.; Zhang, J. A fast finite-time convergent guidance law with nonlinear disturbance observer for unmanned aerial vehicles collision avoidance. Aerosp. Sci. Technol. 2019, 86, 204–214.
32. Rastgoftar, H.; Atkins, E.M. Safe multi-cluster UAV continuum deformation coordination. Aerosp. Sci. Technol. 2019, 91, 640–655.
33. Chamitoff, G.E.; Saenz-Otero, A.; Katz, J.G.; Ulrich, S.; Morrell, B.J.; Gibbens, P.W. Real-time maneuver optimization of space-based robots in a dynamic environment: Theory and on-orbit experiments. Acta Astronaut. 2018, 142, 170–183.
34. Hong, Z.; Chun-Long, S.; Zi-Jun, Z.; Wei, A.; De-Qiang, Z.; Jing-Jing, W. A modified dynamic window approach to obstacle avoidance combined with fuzzy logic. In Proceedings of the 2015 14th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES), Guiyang, China, 18–24 August 2015; pp. 523–526.
35. Abubakr, O.A.; Jaradat, M.A.K.; Hafez, M.A. A reduced cascaded fuzzy logic controller for dynamic window weights optimization. In Proceedings of the 2018 11th International Symposium on Mechatronics and its Applications (ISMA), Sharjah, UAE, 4–6 March 2018; pp. 1–4.
36. Faisal, M.; Hedjar, R.; Al Sulaiman, M.; Al-Mutib, K. Fuzzy logic navigation and obstacle avoidance by a mobile robot in an unknown dynamic environment. Int. J. Adv. Robot. Syst. 2013, 10.
37. Zavlangas, P.G.; Tzafestas, S.G. Industrial robot navigation and obstacle avoidance employing fuzzy logic. J. Intell. Robot. Syst. 2000, 27, 85–97.
38. Morrell, B.J.; Chamitoff, G.E.; Gibbens, P.W. Autonomous operation of multiple free-flying robots on the international space station. In Proceedings of the 25th AAS/AIAA Spaceflight Mechanics Conference, Williamsburg, VA, USA, 11–15 January 2015.
39. Gao, Q.; Liu, J.; Tian, T.; Li, Y. Free-flying dynamics and control of an astronaut assistant robot based on fuzzy sliding mode algorithm. Acta Astronaut. 2017, 138, 462–474.
40. Yang, H.I.; Zhao, Y.J. Trajectory planning for autonomous aerospace vehicles amid known obstacles and conflicts. J. Guid. Control Dyn. 2004, 27, 997–1008.
41. Park, J.; Baek, H. Stereo vision based obstacle collision avoidance for a quadrotor using ellipsoidal bounding box and hierarchical clustering. Aerosp. Sci. Technol. 2020, 103, 105882.
42. Llamazares, Á.; Ivan, V.; Molinos, E.; Ocaña, M.; Vijayakumar, S. Dynamic obstacle avoidance using Bayesian occupancy filter and approximate inference. Sensors 2013, 13, 2929–2944.
Figure 1. Agent coordinate system.
Figure 2. The DF_DWA flowchart.
Figure 3. Two-level (a) velocity space and (b) Cartesian space.
Figure 4. (a) Local goals in trajectory planning and (b) the position of the local goals.
Figure 5. The selection of the candidate local goals.
Figure 6. Angle θerror and ϕerror to the goal.
Figure 7. Perception prediction planning architecture.
Figure 8. The path planning results with each reward value of the evaluation function set to the predominant weight: (a) front view; (b) side view.
Figure 9. The path planning results with each reward value of the evaluation function set to zero: (a) front view; (b) side view.
Figure 10. Fuzzy input membership functions.
Figure 11. Fuzzy output membership functions.
Figure 12. (a) Planned trajectory and (b) velocity in the static obstacle avoidance scenario.
Figure 13. (a) Planned trajectory and (b) the changes in the reward weights during iterations in two dynamic obstacle avoidance scenarios.
Figure 14. Dynamic obstacle avoidance: (a) four agents flying to the same plane; (b) four agents crossing each other; (c) the minimum obstacle distance between the agents at the same moment during the iterations in the FFLSP scenario; (d) the minimum obstacle distance between the agents at the same moment during the iterations in the FFCL scenario.
Figure 15. The changes in the reward weights during iterations in (a) FFLSP and (b) FFCL scenarios.
Table 1. The results of the parameter test.

| Configuration | α | v̄ | ω̄_pitch | ω̄_yaw | σ_v² | σ_ωpitch² | σ_ωyaw² |
|---|---|---|---|---|---|---|---|
| No ε_pitchheading | 282 | 0.7204 | 0.0034 | −0.386 | 0.0091 | 0.0029 | 0.2630 |
| Predominant ε_pitchheading | 197 | 0.3621 | 0.0039 | 0.0495 | 0.056 | 0.0231 | 0.0040 |
| No ε_yawheading | 219 | 0.7165 | −0.1571 | 0.0515 | 0.0107 | 0.2323 | 0.0048 |
| Predominant ε_yawheading | 462 | 0.16 | 0.006 | 0.0244 | 0.0258 | 0.0123 | 0.0073 |
| No ε_goal | 100 | 0.0318 | 0.0602 | 0.0779 | 7.12 × 10⁻⁴ | 0.003 | 0.0047 |
| Predominant ε_goal | 354 | 0.6429 | 0.0282 | 0.4408 | 0.0282 | 0.0240 | 0.1436 |
| No ε_obstacle | 234 | 0.3038 | 0.0142 | 0.0417 | 0.0372 | 0.0147 | 0.0048 |
| Predominant ε_obstacle | 187 | 0.2055 | 0.0095 | 0.0415 | 0.0438 | 0.0109 | 0.0093 |
| No ε_speed | 340 | 0.2139 | 0.0097 | 0.0429 | 0.0446 | 0.0114 | 0.0091 |
| Predominant ε_speed | 169 | 0.4195 | 0.0293 | 0.0817 | 0.0723 | 0.0178 | 0.0039 |
Table 2. Fuzzy control rules of Distgoal, Distobstacle, and εgoal.

| Distgoal \ Distobstacle | ZS | PS | PM | PB | PH |
|---|---|---|---|---|---|
| Z | PM | PM | PM | PB | PH |
| PS | PS | PS | PM | PB | PH |
| PM | Z | PS | PM | PB | PB |
| PB | Z | Z | PS | PM | PB |
| PH | Z | Z | Z | PS | PM |

Table 3. Fuzzy control rules of Distgoal, Distobstacle, and εobstacle.

| Distgoal \ Distobstacle | ZS | PS | PM | PB | PH |
|---|---|---|---|---|---|
| Z | PB | PB | PM | PS | Z |
| PS | PH | PB | PM | PS | Z |
| PM | PH | PB | PM | PS | Z |
| PB | PH | PB | PM | PS | PS |
| PH | PH | PH | PB | PM | PS |

Table 4. Fuzzy control rules of Distgoal, Distobstacle, and εspeed.

| Distgoal \ Distobstacle | ZS | PS | PM | PB | PH |
|---|---|---|---|---|---|
| Z | ZS | ZS | PS | PM | PB |
| PS | ZS | ZS | PS | PM | PB |
| PM | ZS | ZS | PS | PM | PB |
| PB | ZS | PS | PM | PM | PB |
| PH | ZS | PS | PM | PB | PH |

Table 5. Fuzzy control rules of Distgoal, Distobstacle, and εyawheading.

| Distgoal \ Distobstacle | ZS | PS | PM | PB | PH |
|---|---|---|---|---|---|
| Z | Z | Z | Z | PS | PS |
| PS | Z | Z | PS | PS | PS |
| PM | PS | PS | PS | PS | PS |
| PB | PS | PS | PM | PM | PB |
| PH | PS | PS | PM | PB | PB |

where Z is zero, ZS is zero-small, PS is positive small, PM is positive medium, PB is positive big, and PH is positive huge.
Table 6. Agent and obstacle parameters.

| Parameter | Value |
|---|---|
| Maximum linear velocity (m/s) | 2 |
| Minimum linear velocity (m/s) | 0 |
| Maximum angular velocity (degree/s) | 60 |
| Maximum linear acceleration (m/s²) | 0.5 |
| Maximum angular acceleration (degree/s²) | 90 |
| Radius of the agent Ragent (m) | 0.15 |
| Radius of the obstacles Robstacle (m) | 0.15 |
| Safe distance (m) | 0.3 |
| Allowable position error p_e (m) | 0.1 |
| Prediction time (s) | 3 |
Table 7. The number of iterations and minimum obstacle distance.

| Method | Iterations | Minimum Distance (m) |
|---|---|---|
| DWA | 179 | 0.28 |
| F_DWA | 2230 | 0.35 |
| DF_DWA | 289 | 0.4 |
Table 8. Dynamic obstacle avoidance: numerical results.

| Experiment | Agent | α | v̄ | ω̄_pitch | ω̄_yaw | σ_v² | σ_ωpitch² | σ_ωyaw² |
|---|---|---|---|---|---|---|---|---|
| Two agents | Agent 1 | 222 | 0.4045 | 0.0223 | −0.066 | 0.0535 | 0.0136 | 0.0251 |
| | Agent 2 | 222 | 0.4824 | −0.0516 | 0.0085 | 5.37 × 10⁻⁴ | 0.0034 | 0.0487 |
| FFLSP | Agent 1 | 133 | 0.5476 | 0.1548 | 0.0395 | 0.0029 | 0.1059 | 0.033 |
| | Agent 2 | 148 | 0.6463 | 0.0709 | 0.0828 | 0.0073 | 0.1152 | 0.021 |
| | Agent 3 | 129 | 0.5007 | 0.1288 | 0.2764 | 0.0578 | 0.1003 | 0.0099 |
| | Agent 4 | 137 | 0.6686 | 0.1802 | 0.1684 | 0.0269 | 0.0924 | 0.1031 |
| FFCL | Agent 1 | 132 | 0.5214 | −0.0114 | 0.1871 | 0.0615 | 0.0937 | 0.0373 |
| | Agent 2 | 173 | 0.6789 | 0.0139 | 0.1799 | 0.0411 | 0.1245 | 8.8 × 10⁻⁴ |
| | Agent 3 | 159 | 0.6081 | −0.023 | 0.0534 | 0.0036 | 0.1388 | 0.0012 |
| | Agent 4 | 143 | 0.6 | −0.1670 | −0.205 | 0.0439 | 0.1085 | 0.0746 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
