1. Introduction
This article addresses the path planning and control of a robot in a large building with one or more elevators. Suppose that the robot is ordered to visit another place (goal) in a large building. Once a goal point in the building is set, the robot needs to move until it reaches the goal point. This scenario is important for the operation of many indoor robots, such as home robots [1,2], service robots [3,4], or nursing robots [5,6].
There may be a case where the goal is on a floor different from the floor where the robot is located. In order to handle this case, the robot needs to take an elevator. Humans take an elevator effortlessly, since elevators are designed for easy use by human operators. However, programming a robot to accomplish this task is surprisingly challenging [7].
Many papers [7,8,9] have addressed how to make a robot use an elevator. It is assumed that the robot can utilize elevators to move from one floor to another using the vision-based manipulation approach in [7,8]. How to make the robot operate the elevator buttons is not within the scope of our paper.
This paper considers the case where the robot moves in a known building environment. Therefore, the robot can locate itself using Monte Carlo localization [10] or RFID tags [11,12]. By deploying multiple RFID tags in the building, one can localize the robot with an RFID antenna and RFID reader [12]. The references [11,12] showed that localization systems using RFID tags can achieve high estimation accuracy in real indoor scenarios. How to localize the robot in a known building is not within the scope of our paper.
In [13], the authors addressed how to produce a graph-based indoor network including floor-level and non-level paths from Industry Foundation Classes (IFC)-based building models. Ref. [13] required only the geometric information of IFC data models to automatically identify indoor space boundaries as well as to produce six categories of indoor paths. However, Ref. [13] did not address tracking controls of a robot based on the graph-based indoor network.
A Voronoi graph has been widely used in robotics [14,15,16,17,18,19,20]. A 2D Voronoi diagram partitions the plane into regions, each containing the points closest to one of a given set of sites. In the Voronoi graph, a Voronoi edge is the set of points which are equidistant from the two closest obstacles. According to this definition, a Voronoi graph provides a connected curve which is safe and suitable as a robot path. In our paper, the robot uses a Voronoi graph as its path, since the graph provides the safest path, i.e., the path farthest from all obstacles.
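To make this definition concrete, the sketch below is a minimal brute-force check (not the construction used in the paper) that a point lies on a Voronoi edge, i.e., that its two nearest obstacle points are equidistant; the sites and test points are illustrative.

```python
import math

def nearest_two_distances(p, sites):
    """Return the distances from point p to its two closest sites."""
    d = sorted(math.dist(p, s) for s in sites)
    return d[0], d[1]

def is_on_voronoi_edge(p, sites, tol=1e-9):
    """A point lies on a Voronoi edge if its two nearest sites are equidistant."""
    d0, d1 = nearest_two_distances(p, sites)
    return abs(d0 - d1) <= tol

# Two point obstacles; the Voronoi edge between them is the bisector x = 0.
sites = [(-1.0, 0.0), (1.0, 0.0)]
```

Every point on the vertical line x = 0 is equidistant from both sites, so it belongs to the Voronoi edge; points closer to one site than the other do not.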
For planning the robot’s path inside a large building with one or more elevators, this article addresses a topological map, called the building Voronoi graph. The building Voronoi graph consists of Voronoi vertices and Voronoi edges. The building Voronoi graph is generated as a contraction of a large building map, which is assumed to be known a priori.
In the building Voronoi graph, each floor is modeled utilizing the 2D Voronoi graph, and utilizing elevators is modeled as edges connecting distinct 2D Voronoi graphs. One sets the weight of an edge in the graph, based on the travel time inside an elevator and the traversal time of a passage. Using the building Voronoi graph, the robot can obtain a safe path to the goal.
Since the building Voronoi graph is a contraction of the entire building map (e.g., grid map), one can reduce the map size required for building exploration. Using the building Voronoi graph, one only needs to calculate the shortest path along the Voronoi graph, instead of searching the entire grid map. Therefore, one can improve the path-finding efficiency considerably, compared to the case where one searches for the shortest path utilizing all grid cells in the grid map.
In practice, the robot needs to use an elevator while moving to reach the goal. Moreover, the robot needs to spend some time while staying inside an elevator. One thus sets the weight of a Voronoi edge based on the travel time in an elevator and the traversal time of a passage. Using the weighted Voronoi graph, the robot finds the shortest path to the goal and follows the path.
This paper considers a scenario where the robot detects and identifies an object utilizing on-board sensors. For instance, the robot can detect a nearby object utilizing its camera or Lidar. For object identification in the camera image, one can utilize real-time object identification methods, such as you only look once (YOLO) [21,22,23] or Fast R-CNN [24]. For real-time object identification using Lidar, one can apply the classification approaches in [25,26]. The classifiers in [25,26] classify objects into cars or pedestrians. How to identify objects using on-board sensors is not within the scope of our paper. If the robot detects an object (e.g., a human) while following a Voronoi edge, then it avoids the object utilizing reactive control laws.
There are many papers on collision-avoidance control laws in 2D environments [27,28,29]. The authors of [27] addressed the dynamic window approach for collision avoidance. The collision-avoidance controls in [27,28,29] did not consider the velocity of a moving object. In other words, the collision-avoidance controls in [27,28,29] designed the robot's path assuming that all obstacles are static. Hence, these controls may not be suitable for avoiding a fast-moving object. In our paper, we develop a reactive collision-avoidance control which explicitly considers the velocity of a moving object. This distinguishes our paper from other papers which did not consider moving obstacles.
For collision avoidance in the literature, obstacles were simplified as spheres or bounding boxes in [30,31,32,33]. However, simplifying an object's shape is not desirable, especially in the case where an object has an arbitrary shape. Our article develops reactive collision-avoidance controls which do not simplify an object's shape. As far as we know, our paper is novel in addressing reactive collision-avoidance controls which can avoid objects with arbitrary shapes.
To the best of our knowledge, no paper in the literature has handled how to make a robot follow the Voronoi graph while avoiding collision with other objects. Our paper integrates the proposed reactive collision-avoidance control with a path-tracking control, so that the robot can follow the Voronoi graph while avoiding collision with static map environments or (moving or static) objects with arbitrary shapes.
To the best of our knowledge, the novelties of our paper can be summarized as follows:
This paper addresses how to make the robot follow the building Voronoi graph, while avoiding collision with map environments as well as (moving or static) objects with arbitrary shapes.
This paper proposes a reactive collision-avoidance control, which achieves collision avoidance with map environments as well as (moving or static) objects with arbitrary shapes.
We verify the effectiveness of the proposed approaches utilizing MATLAB simulations.
This paper is organized as follows.
Section 2 reviews the literature on robot path planning.
Section 3 presents definitions and assumptions in this paper.
Section 4 presents the path planner utilizing the building Voronoi graph.
Section 5 introduces the reactive control for collision avoidance.
Section 6 discusses how to make the robot follow the Voronoi path while avoiding collision.
Section 7 provides MATLAB simulations to verify the effectiveness of the proposed approaches.
Section 8 provides conclusions.
2. Literature Review on Robot Path Planning
Various papers exist on the path planning of robots. 3D path planning was introduced in [34], so that a robot can arrive at its goal as fast as possible while not colliding with 3D obstacles in the environment. Many path-planning algorithms, such as RRT-star or RRT-star-smart [35,36,37], used random sampling to generate a path in environments without grid cells. Three-dimensional path planning is distinct from path planning in a building structure with elevators, which is handled in our paper.
In the literature on robotics, the A-star algorithm, Theta-star algorithm, Dijkstra algorithm, and Voronoi graph were widely used for the path planning of various robots [38,39,40,41]. The A-star algorithm shows better performance than the Dijkstra algorithm by utilizing heuristics to guide its search towards the goal. The Theta-star algorithm in [38,42] is a variant of the A-star algorithm, propagating information along grid edges without constraining the paths to the grid edges. The A-star, Theta-star, and Dijkstra algorithms [39,40,41] require that the workspace is composed of multiple grid cells. In order to generate a Voronoi graph in grid maps, one can apply brushfire algorithms [43], which build the distance map at each grid cell.
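The brushfire idea can be sketched as a breadth-first wavefront that assigns each free cell its grid distance to the nearest obstacle cell; the grid and obstacle layout below are illustrative, not taken from the paper.

```python
from collections import deque

def brushfire(grid):
    """Compute, for each cell, the 4-connected distance to the nearest
    obstacle cell. grid[r][c] == 1 marks an obstacle, 0 marks free space."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    # Seed the wavefront with all obstacle cells at distance 0.
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                dist[r][c] = 0
                q.append((r, c))
    # Expand the wavefront one cell at a time.
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# Obstacles on the left and right walls; the middle column is farthest.
grid = [[1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1],
        [1, 0, 0, 0, 1]]
dmap = brushfire(grid)
```

Ridges of local maxima in the resulting distance map approximate the Voronoi edges of the grid environment.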
However, as the workspace size increases, the number of required grid cells increases. As the building size increases, the required grid map size increases as well. Therefore, computing a path utilizing the A-star, Theta-star, brushfire, or Dijkstra algorithms in a large building with many floors may not be feasible due to the large computational load. Therefore, we argue that grid maps are not desirable in the case where one needs to build a path in a large building with many floors.
This inspired us to build a Voronoi graph for representing a large building with many floors. In robotics, a Voronoi graph has been used as a topological map of a workspace [14,15,16,17,18,19,20]. In [44], multiple robots generate a Voronoi graph as a topological map of the environment, while dropping indexed markers at Voronoi vertices. The references [45,46,47] presented robot localization solutions as a robot explores a 2D environment by moving along Voronoi edges. In [45,46,47], a Voronoi graph is generated by a robot which moves along Voronoi edges based on range sensor measurements. However, due to sensing errors or obstacles in the environment, a robot with range sensors may have difficulty in generating an accurate Voronoi graph of the environment.
In our paper, the building Voronoi graph is built based on a large building map (e.g., grid map), which is assumed to be known a priori. Hence, our Voronoi graph is robust to sensing errors or obstacles in the environment. This distinguishes our paper from other papers [45,46,47] on Voronoi graphs.
Voronoi path planning based on two steps was introduced in [48]. In the first step, the safest areas in the environment are extracted by means of a Voronoi graph. In the second step, a fast marching approach is applied to the extracted Voronoi areas in order to obtain the shortest path. In [49], the authors introduced an algorithm based on a Voronoi graph in order to compute an optimal path between a start and a goal in the presence of simple disjoint polygonal obstacles. However, in practice, obstacles can have various shapes other than simple polygonal shapes. Our paper addresses the building Voronoi graph, which is generated based on a large building map with arbitrary shapes.
In practice, the robot needs to use an elevator which yields the shortest path to the goal. Moreover, the robot needs to spend some time while staying inside an elevator. We thus set the weight of a Voronoi edge based on the travel time in an elevator and the traversal time of a passage.
We address path planning based on the building Voronoi graph, which represents a building structure with elevators. To the best of our knowledge, our paper is novel in the path planning and tracking control of a robot based on a building Voronoi graph. In addition, as far as we know, our paper is novel in addressing how to make the robot follow the Voronoi graph, while avoiding collision with other objects or static map environments. Moreover, our paper is unique in addressing reactive collision-avoidance controls, which can avoid objects with arbitrary shapes.
4. Path Planner Utilizing a Building Voronoi Graph
This section presents the path planner utilizing a building Voronoi graph. First, one presents how to generate the Voronoi graph of each floor. Each floor is represented as a plane with obstacles.
Let mapPoints denote the 2D points associated with static obstacle boundaries in a known 2D map. Since the robot is localized in a known map, it can derive the relative position of a mapPoint while it moves. The robot needs to avoid colliding with mapPoints while it moves. Moreover, the robot needs to avoid colliding with objects, such as humans, in the building environment.
At every floor, map boundaries are represented as mapPoints in the workspace. For instance, in a grid map, a mapPoint can be an occupied grid cell representing the map boundary. MapPoints can be obtained utilizing the map of each floor in the building. It is assumed that mapPoints are known a priori.
Using mapPoints at the i-th floor, we generate the floor's Voronoi graph. Suppose that mapPoints exist at the i-th floor, and consider the position of the n-th mapPoint at the i-th floor.
Suppose that one can access the map of every floor, representing mapPoints on the associated plane. At the i-th floor, one generates a Voronoi graph as follows.
In MATLAB, the voronoi function returns the set of Voronoi edges associated with a given set of seed points (mapPoints in this paper). The Voronoi edges are all points in the plane which are equidistant to the two nearest seed points (mapPoints in this paper). A Voronoi vertex is a point where more than two Voronoi edges meet.
In the case where mapPoints are generated from the sensing measurements of a robot, the mapPoints may form a cluster. In the case where mapPoints are clustered, one may generate a Voronoi vertex which exists inside the clustered mapPoints.
See Figure 2 for an illustration. Five mapPoints exist in total. In this figure, the left three mapPoints are generated from the boundary of an obstacle, say A. Furthermore, the right two mapPoints are generated from the boundary of another obstacle, say B. In Figure 2, a curve segment indicates a Voronoi edge generated utilizing mapPoints. Furthermore, a circle indicates a Voronoi vertex. At the center of the three mapPoints on the left side, there exists a Voronoi vertex. A Voronoi edge leading to this vertex cannot be utilized as a path for a robot, since this vertex exists on an obstacle boundary. One thus removes Voronoi vertices which are too close to mapPoints. Then, Voronoi edges leading to the removed vertices also disappear. In Figure 2, red bold curve segments indicate the remaining Voronoi edges after removing Voronoi vertices which are too close to mapPoints.
For instance, we depict a map which is composed of mapPoints. In Figure 3, mapPoints are plotted as yellow points. Blue curve segments in this figure indicate the trimmed Voronoi graph, which is obtained after removing vertices that are too close to mapPoints. Figure 4 is an enlarged view of Figure 3. See that mapPoints are clustered around map boundaries. In simulations, we removed a Voronoi vertex whose distance from a mapPoint is less than a certain threshold (5 m in MATLAB simulations).
Figure 3 shows that we still have Voronoi edges which are not necessary for robot path planning. In fact, we can trim a Voronoi edge which crosses the workspace boundary. In addition, we can trim a Voronoi vertex which exists outside the workspace. Furthermore, we can remove Voronoi edges which lead to workspace corners [54]. The authors of [54] considered the case where the robot builds a Voronoi diagram using range sensor measurements. In this case, edges leading to workspace corners can be easily detected using on-board sensors on the robot. However, in our paper, the Voronoi graph is built using the building map, not range sensor measurements. Trimming additional edges incurs an additional computational load. Considering the computational load, we only removed Voronoi vertices that are too close to a mapPoint.
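The vertex-trimming step described above can be sketched as follows; this is a simplified version in which the vertex and mapPoint coordinates are illustrative, and the 5 m threshold follows the value reported for the simulations.

```python
import math

def trim_voronoi(vertices, edges, map_points, d_min=5.0):
    """Remove Voronoi vertices closer than d_min to any mapPoint, together
    with the edges leading to the removed vertices.
    vertices: dict vertex_id -> (x, y); edges: list of (vertex_id, vertex_id)."""
    keep = {
        vid for vid, p in vertices.items()
        if all(math.dist(p, m) >= d_min for m in map_points)
    }
    kept_vertices = {vid: vertices[vid] for vid in keep}
    kept_edges = [(a, b) for a, b in edges if a in keep and b in keep]
    return kept_vertices, kept_edges

# Toy example: vertex 1 sits on an obstacle boundary and is trimmed,
# so the edge (0, 1) leading to it disappears as well.
vertices = {0: (0.0, 0.0), 1: (10.0, 0.2), 2: (0.0, 12.0)}
map_points = [(10.0, 0.0), (20.0, 0.0)]
edges = [(0, 1), (0, 2)]
kept_v, kept_e = trim_voronoi(vertices, edges, map_points)
```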
It is acknowledged that a Voronoi graph does not provide the shortest path between two points. This article uses the Voronoi graph as the robot’s path, since it provides the safest path for the robot. Any Voronoi edge in the workspace can be used as the robot’s path segment.
At the i-th floor, we generate a trimmed Voronoi graph and set it as the floor's topological map. Each edge in the trimmed Voronoi graph is weighted using its edge length and its associated passage width.
Consider the trimmed Voronoi graph on the i-th floor. Suppose that a Voronoi edge connects two vertices. We next present how to set the weight of this edge. The length of this edge is the distance between the positions of the two vertices. The traversal time of this edge is conjectured as the edge length divided by the robot's maximum speed, since the robot's speed never exceeds its maximum. Therefore, the weight of this edge is set as its length divided by the maximum speed. See that this weight corresponds to the time required to traverse the edge as the robot moves at its maximum speed.
We further generate a link edge connecting between distinct floors. Recall that a robot can move from one floor to another utilizing elevators. We generate a Voronoi vertex at a workspace position where an elevator exists.
Recall that a Voronoi edge connecting two vertices on distinct floors is called a link. The weight of a link is set considering the travel time inside the associated elevator; it is proportional to the floor difference between the two floors, scaled by a constant. The weight of a link is set considering the fact that, as the floor difference between two floors increases, it takes more time for the elevator to move between the two floors.
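Under the weighting rules above, the weight computation might be sketched as follows; the names V_MAX and C_ELEV are placeholders for the paper's notation, with the 2 m/s maximum speed taken from the simulation setup and the elevator constant chosen arbitrarily.

```python
import math

V_MAX = 2.0    # robot's maximum speed in m/s (value used in the simulations)
C_ELEV = 1.0   # assumed constant scaling the elevator travel time per floor

def edge_weight(pos_a, pos_b):
    """Weight of an intra-floor Voronoi edge: traversal time at maximum speed."""
    return math.dist(pos_a, pos_b) / V_MAX

def link_weight(floor_a, floor_b):
    """Weight of a link (elevator edge): proportional to the floor difference."""
    return C_ELEV * abs(floor_a - floor_b)
```

With these definitions, both edge types are weighted in units of time, so a single shortest-path search can compare passages and elevator rides directly.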
Once we obtain all weighted edges in the building Voronoi graph, we generate the adjacency graph based on the weighted edges. We search for the vertex which is closest to the robot's initial position and set it as the start vertex. In addition, we search for the vertex which is closest to the goal point and set it as the goal vertex.
We obtain the shortest path from the start vertex to the goal vertex utilizing the adjacency graph. For finding the shortest path in a graph, Dijkstra's algorithm [55] can be used. The building Voronoi graph is generated considering the entire building, including all elevators. Thus, the start and goal vertices can exist on distinct floors. If necessary, the robot uses an elevator which can lead to the goal as fast as possible.
Let us define the sequence of Voronoi vertices along the shortest path from the start vertex to the goal vertex. In total, Q vertices exist along this path.
Note that we obtain the shortest path along the building Voronoi graph, instead of the entire workspace (e.g., grid map). Therefore, one can improve the path-finding efficiency considerably, compared to the case where one searches for the shortest path utilizing all cells in the workspace. Since the building Voronoi graph is a contraction of the entire building map (e.g., grid map), one can reduce the map size required for building exploration.
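The shortest-path search on the weighted building Voronoi graph can be sketched with a standard Dijkstra implementation; the small graph below is illustrative, with two "floors" joined by a link edge whose weight stands in for the elevator travel time.

```python
import heapq

def dijkstra(adj, start, goal):
    """Shortest path on a weighted adjacency dict {u: [(v, w), ...]}.
    Returns (total_weight, vertex_sequence)."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == goal:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in adj.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return float("inf"), []

# Vertices 'a1', 'b1' on floor 1; 'a2', 'b2' on floor 2.
# The pair ('b1', 'a2') is a link edge through the elevator.
adj = {
    "a1": [("b1", 2.0)],
    "b1": [("a1", 2.0), ("a2", 5.0)],
    "a2": [("b1", 5.0), ("b2", 1.0)],
    "b2": [("a2", 1.0)],
}
cost, path = dijkstra(adj, "a1", "b2")
```

Because the search runs only over Voronoi vertices rather than grid cells, its cost depends on the (much smaller) graph size, which is the efficiency argument made in the text.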
Once the Voronoi path is built, the robot follows the path until it reaches the goal vertex. How to make the robot follow the path is discussed in Section 6.
5. Reactive Control for Collision Avoidance
While the robot moves along the planned path, it needs to avoid colliding with objects, such as humans. Suppose the robot detects an object's surface utilizing local ranging sensors, such as Lidar, with a finite maximum sensing range. Recall that the robot measures objectBoundaryPoints on the surface of an object with an arbitrary shape. One introduces a reactive collision-avoidance control which makes the robot avoid colliding with every objectBoundaryPoint.
5.1. Collision Prediction within Sample-Steps in the Future
The robot’s velocity command at each sample-step is set for avoiding a collision with an object detected at the moment. In other words, the robot’s velocity command is set at every sample-step.
Suppose that the robot's velocity vector at sample-step k is given. We address how to evade the situation where the robot and an objectBoundaryPoint collide between sample-step k and a future sample-step, where the prediction window size is set by the robot's operator. In the collision prediction, one assumes that the objectBoundaryPoint maintains its velocity between sample-step k and the end of the prediction window. This assumption is required, since the robot at sample-step k needs to set its velocity vector considering the predicted motion of an object. This assumption is trivially satisfied when the prediction window covers a single sample-step. However, such a short prediction window may lead to a collision of the robot.
Let objectCircle define the circle centered at an objectBoundaryPoint, whose radius is the safety margin defined in (3).
No collision happens between sample-step k and the end of the prediction window if the inequality (5) holds over the whole window. Rearranging (5), we obtain (6). As one increases u in (6) from 0 to the prediction horizon, the corresponding point traces a straight line segment. Utilizing (6), Lemma 1 follows.
Lemma 1. Assume that an objectBoundaryPoint maintains its velocity between sample-step k and the end of the prediction window. Assume that the robot's position is outside the objectCircle centered at the objectBoundaryPoint. Between sample-step k and the end of the prediction window, the robot does not collide with the objectBoundaryPoint, as long as the segment in (6) does not cross the objectCircle.
Under Lemma 1, the robot can anticipate a collision within the prediction window. The robot examines whether its velocity vector meets the following safety requirement: the segment in (6) does not cross the objectCircle. If the segment does not cross the objectCircle, then the robot's velocity vector avoids collision within the prediction window, and the robot utilizes this velocity for a single sample duration. Note that the robot's velocity vector is reset at every sample-step.
For collision avoidance within the prediction window, Lemma 1 uses the following assumption: within the window, an objectBoundaryPoint maintains its velocity. This assumption implies that an objectBoundaryPoint maintains its velocity for the duration of the window. As the sampling interval decreases, this duration decreases. Thus, the above assumption becomes easier to satisfy. Hence, it is desirable that the sampling interval is as small as possible for satisfying the assumption in Lemma 1.
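The safety requirement of Lemma 1 reduces to a segment-circle intersection test: the robot's position relative to the (constant-velocity) objectBoundaryPoint traces a straight segment over the prediction window, and that segment must not cross the objectCircle. The sketch below implements this test under those assumptions; all names are placeholders for the paper's stripped symbols.

```python
import math

def segment_crosses_circle(p0, p1, center, radius):
    """True if the segment p0-p1 comes within `radius` of `center`."""
    (x0, y0), (x1, y1), (cx, cy) = p0, p1, center
    dx, dy = x1 - x0, y1 - y0
    if dx == 0 and dy == 0:
        return math.dist(p0, center) <= radius
    # Project the center onto the segment, clamping to its endpoints.
    t = ((cx - x0) * dx + (cy - y0) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    closest = (x0 + t * dx, y0 + t * dy)
    return math.dist(closest, center) <= radius

def velocity_is_safe(rob_pos, rob_vel, obj_pos, obj_vel, radius, horizon):
    """Lemma 1's requirement: over `horizon` steps, the robot's position
    relative to the constant-velocity point must stay outside the circle."""
    rel0 = (rob_pos[0] - obj_pos[0], rob_pos[1] - obj_pos[1])
    relv = (rob_vel[0] - obj_vel[0], rob_vel[1] - obj_vel[1])
    rel1 = (rel0[0] + horizon * relv[0], rel0[1] + horizon * relv[1])
    return not segment_crosses_circle(rel0, rel1, (0.0, 0.0), radius)
```

A head-on velocity toward a static point is rejected, while one that passes outside the safety margin is accepted.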
5.2. Find a Safe Velocity Vector of the Robot
If the segment in (6) crosses the objectCircle, then the robot must calculate a safe velocity vector. Algorithm 1 is used to make the robot avoid colliding with objects as well as static map environments. Let one set denote the objectBoundaryPoints measured at sample-step k, and another set denote the mapPoints measured at sample-step k. Recall that mapPoints denote the 2D points associated with static obstacle boundaries in a known 2D map. The union of the two sets collects every point the robot must avoid. Since the safety requirement is not satisfied for at least one point, the robot under Algorithm 1 calculates a velocity vector satisfying the safety requirement for every point in the union. Once a velocity direction which satisfies the safety requirement is found, the robot utilizes the found velocity vector as its velocity for a single sample-step.
In Algorithm 1, the collision-counting function returns the number of points colliding with a candidate heading vector. For collision prediction within the prediction window, we use the safety requirement. If the count is zero, then Lemma 1 proves that moving with the candidate velocity avoids collision with every point within the window. Thus, one obtains a safe velocity vector avoiding collision with every point, and one sets it as the robot's velocity vector at the next sample-step. On the other hand, if no candidate yields a zero count, then one cannot derive a safe velocity vector which evades colliding with every point within the window. In this situation, one finds a velocity vector with the minimum collision probability by choosing the heading vector which collides with the minimum number of points.
Algorithm 1 shows how to obtain a safe velocity vector at the next sample-step. Consider the safe velocity vector at the next sample-step and the robot's heading angle at sample-step k. Under (2), the heading angle is derived from the velocity vector. Recall that the robot's speed is limited by its maximum speed; therefore, the safe speed is limited accordingly.
Algorithm 1 finds a safe speed utilizing a FOR loop over candidate speeds from zero to the maximum speed. Here, N is a tuning parameter indicating the search precision. As one increases N, one can increase the precision of the speed search. However, increasing N increases the computational load. In simulations, the speed search is performed over N distinct speeds.
In Algorithm 1, the FOR statement indicates that a velocity vector with low speed is examined before a velocity vector with high speed. In this way, the robot does not accelerate abruptly. This is desirable considering human safety in the building.
Algorithm 1 Find a safe velocity vector at sample-step
1: The current sample-step is k;
2: the set of objectBoundaryPoints measured at sample-step k;
3: the set of mapPoints measured at sample-step k;
4: ;
5: if == 0 then
6:   Set as the robot's velocity vector at sample-step ;
7:   Return , and this algorithm is finished;
8: end if
9: ;
10: for , , while do
11:   for , , while do
12:     ;
13:     ;
14:     if ; then
15:       Set as the robot's velocity vector at sample-step ;
16:       Return , and this algorithm is finished;
17:     end if
18:     if ; then
19:       ; ; ;
20:     end if
21:   end for
22:   for , , while do
23:     ;
24:     ;
25:     if ; then
26:       Set as the robot's velocity vector at sample-step ;
27:       Return , and this algorithm is finished;
28:     end if
29:     if ; then
30:       ; ; ;
31:     end if
32:   end for
33: end for
34: ;
35: Return ;
Algorithm 1 finds a safe heading angle utilizing a FOR loop over candidate headings. This FOR loop implies that a safe heading angle is searched in the clockwise direction starting from the robot's heading. If a safe heading angle is not derived using this clockwise search, then Algorithm 1 searches for a safe heading angle in the counterclockwise direction starting from the robot's heading.
If a candidate heading vector satisfies the safety requirement for every point, then the candidate heading vector is a safe velocity vector which evades collision. Therefore, one sets it as the robot's velocity vector at the next sample-step, and Algorithm 1 is finished.
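The nested search of Algorithm 1 can be sketched as below. This is a simplified re-implementation, not the paper's exact algorithm: collision counting is delegated to a caller-supplied `count_collisions`, and the low-speed-first / clockwise-then-counterclockwise ordering follows the description above. The parameters `n_speed` and `n_heading` are illustrative.

```python
import math

def find_safe_velocity(heading, v_max, count_collisions, n_speed=10, n_heading=18):
    """Search candidate velocities: low speeds before high speeds, and for
    each speed, headings clockwise then counterclockwise from the current
    heading. Returns the first zero-collision velocity found, or otherwise
    the candidate with the minimum collision count."""
    best, best_count = (0.0, 0.0), float("inf")
    for i in range(n_speed + 1):
        speed = v_max * i / n_speed
        for sign in (-1, 1):                      # clockwise first, then ccw
            for j in range(n_heading + 1):
                ang = heading + sign * j * math.pi / n_heading
                v = (speed * math.cos(ang), speed * math.sin(ang))
                c = count_collisions(v)
                if c == 0:
                    return v                      # safe velocity found
                if c < best_count:
                    best, best_count = v, c
    return best                                   # minimum-collision fallback

# Toy collision model: only velocities with a vertical component above 0.5
# are collision-free, so the search must both speed up and turn.
count = lambda v: 0 if v[1] > 0.5 else 1
v = find_safe_velocity(heading=0.0, v_max=2.0, count_collisions=count)
```

Because low speeds are tried first, the returned velocity has the smallest magnitude that still satisfies the (toy) safety requirement.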
In Algorithm 2, a threshold is set considering the computational load of the algorithm. By setting this threshold as a small number, one does not have to examine the safety requirement for every point in the measured set.
Algorithm 2
1: ;
2: for do
3:   if and do not satisfy the safety requirement; then
4:     ;
5:   end if
6:   if then
7:     break;
8:   end if
9: end for
10: Return ;
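Algorithm 2's early-exit counting can be sketched as follows; `n_max` is a placeholder for the paper's stripped threshold, and the safety predicate is supplied by the caller.

```python
def count_collisions(points, violates, n_max=3):
    """Count points that violate the safety requirement, breaking early
    once n_max violations are found to bound the computational load."""
    n = 0
    for p in points:
        if violates(p):
            n += 1
        if n >= n_max:
            break
    return n

# Toy predicate: points within 1 unit of the origin violate the requirement.
pts = [(0.1, 0.0), (5.0, 5.0), (0.2, 0.2), (0.0, 0.5), (0.3, 0.1), (9.0, 9.0)]
violates = lambda p: (p[0] ** 2 + p[1] ** 2) ** 0.5 < 1.0
n = count_collisions(pts, violates)
```

The early break trades exactness for speed: once the count reaches the threshold, the candidate velocity is already known to be unsafe enough to reject.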
7. MATLAB Simulations
MATLAB simulations are presented to verify the effectiveness of the proposed approaches. The robot's maximum speed is 2 m/s. The sample duration is fixed, and the prediction window is utilized in the reactive collision-avoidance control. One also fixes the constant associated with the maneuver of an elevator.
Recall that the robot is near-collision at the moment when the distance between the robot and any point is shorter than the safety margin in (3). In MATLAB simulations, a true collision happens at the moment when the distance between the robot and any point is shorter than 1 m, which is shorter than the safety margin. A MATLAB simulation ends when a true collision happens.
As the maximum sensing range of the robot, one sets 20 m. One further simulates the measurement noise in the robot's local range sensors. In Algorithm 1, every measured point is perturbed by adding Gaussian noise with mean 0 and standard deviation 0.1. In this way, one simulates the case where the robot's local range sensors have some measurement noise.
In practice, the robot's motion is perturbed due to environmental effects, such as wheel slippage. Therefore, instead of (2), the robot's motion model includes a process noise term at each sample-step k. It is assumed that the process noise has a Gaussian distribution with mean 0 and standard deviation 0.1. In this way, one simulates the process error in the robot's motion command.
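The noise model used in the simulations can be sketched as below. This is a simplified single-integrator update: the Gaussian standard deviation of 0.1 follows the text, while the sample duration DT and all other names are assumptions.

```python
import random

DT = 0.1          # assumed sample duration in seconds
NOISE_STD = 0.1   # std. dev. of measurement and process noise (from the text)

def noisy_measurement(true_point, rng):
    """Perturb a measured point with zero-mean Gaussian noise per axis."""
    return tuple(c + rng.gauss(0.0, NOISE_STD) for c in true_point)

def noisy_motion_step(pos, vel, rng):
    """Single-integrator motion update with additive Gaussian process noise."""
    return tuple(p + DT * v + rng.gauss(0.0, NOISE_STD) for p, v in zip(pos, vel))

rng = random.Random(0)  # fixed seed for reproducibility
pos = (0.0, 0.0)
for _ in range(100):
    pos = noisy_motion_step(pos, (2.0, 0.0), rng)
```

Over 100 steps at 2 m/s the nominal displacement is 20 m along x, with the accumulated process noise scattering the final position around that value.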
One considers the case where the robot moves from one floor to another floor. Recall that the robot can utilize elevators to move from one floor to another using the vision-based manipulation approach in [7,8].
The robot's initial position is on one floor. At this floor, we have multiple objects, which do not move. We have an elevator which connects this floor with the goal floor. The goal of the robot is on another floor, where we also have multiple objects.
The designated path is built using the building Voronoi graph (blue line segments in the following figures). Then, the robot moves along the designated path while utilizing the reactive collision-avoidance control in Section 5 if necessary.
We run 20 Monte Carlo (MC) simulations to verify the robustness of the proposed approaches. The robot reaches its goal without collision in all MC simulations.
7.1. Handling Circular Objects
We first consider a case where multiple objects exist, each with a circular shape.
Figure 6 shows the map of the initial floor with a static circular object. MapPoints are plotted as yellow points. The boundary of each static object is depicted with black circles. The robot plans the path from its initial position to the elevator location. Considering one MC simulation, the robot's location at fixed time intervals is depicted as green diamonds. Figure 7 is an enlarged view of Figure 6. See that the robot's path deviates slightly from the Voronoi edges (blue line segments), in order to avoid colliding with an object.
At the goal floor, we have multiple objects, among which four objects move with a constant velocity (in m/s). Consider the map of this floor with moving circular objects. Considering one MC simulation, Figure 8 indicates the robot's location (green diamonds) at fixed time intervals. The boundary of each object at each plotted time is depicted with black circles. The robot plans the path from the elevator position to the goal. Figure 9 is an enlarged view of Figure 8.
Considering the scenario in Figure 8, Figure 10a shows the relative distance between the robot and its closest point at each sample-step. Since the robot's maximum sensing range is 20 m, the relative distance is bounded above by 20 m. A collision occurs at the moment when the distance between the robot and any point is shorter than 1 m. See that no collision happens during the robot maneuver. Recall that we define the point on the path which is closest to the robot at sample-step k. Figure 10b indicates the relative distance between the robot and this closest path point at each sample-step k. See that the robot keeps following the path.
7.2. Handling Various Objects with Arbitrary Shapes
We next consider a case where each object has a shape other than a circle. Considering one MC simulation of the initial floor (static objects), Figure 11 shows the robot's location (green diamonds) at fixed time intervals. The boundary of each arbitrarily shaped object is depicted in black. The robot plans the path from its initial position to the elevator location. Figure 12 is an enlarged view of Figure 11. See that the robot's path deviates slightly from the Voronoi edges (blue line segments), in order to avoid colliding with an object.
At the goal floor, one has multiple objects, among which four objects move with a constant velocity (in m/s). Considering one MC simulation of this floor with moving objects, Figure 13 shows the robot's location (green diamonds) at fixed time intervals. The boundary of each object at each plotted time is depicted with black circles. The robot plans the path from the elevator position to the goal. Figure 14 is an enlarged view of Figure 13.
Considering the scenario in Figure 13, Figure 15a shows the relative distance between the robot and its closest point at each sample-step. Note that a collision occurs at the moment when the distance between the robot and any point is shorter than 1 m. See that no collision happens during the robot maneuver. Figure 15b shows the relative distance between the robot and the closest path point at each sample-step k. See that the robot keeps following the path.