Article

Autonomous Lunar Rover Localization while Fully Scanning a Bounded Obstacle-Rich Workspace

System Engineering Department, Sejong University, Seoul 05006, Republic of Korea
Sensors 2024, 24(19), 6400; https://doi.org/10.3390/s24196400
Submission received: 29 August 2024 / Revised: 24 September 2024 / Accepted: 1 October 2024 / Published: 2 October 2024
(This article belongs to the Special Issue Intelligent Control and Robotic Technologies in Path Planning)

Abstract
This article addresses the scanning path plan strategy of a rover team composed of three rovers, such that the team explores unknown dark outer space environments. This research considers a dark outer space, where a rover needs to turn on its light and camera simultaneously to measure a limited space in front of it. The rover team is deployed from a symmetric base station, and the team’s mission is to scan a bounded obstacle-rich workspace such that no detection hole remains. In the team, only one rover, the hauler, can locate itself utilizing stereo cameras and an Inertial Measurement Unit (IMU). Every other rover follows the hauler without locating itself. Since the Global Navigation Satellite System (GNSS) is not available in outer space, the localization error of the hauler increases as time goes on. To correct the rover’s location estimate, one occasionally makes the rover home to the base station, whose shape and global position are known in advance. Once the rover is near the station, it uses its Lidar to measure the relative position of the base station. In this way, the rover corrects its localization error whenever it homes to the base station. In this research, one makes the rover team fully scan a bounded obstacle-rich workspace without detection holes, while the rover’s localization error remains bounded because the rover homes to the base station occasionally. To the best of our knowledge, this article is novel in addressing a scanning path plan strategy under which a rover team fully scans a bounded obstacle-rich workspace without detection holes, while occasionally correcting the accumulated localization error. The efficacy of the proposed scanning and localization strategy is demonstrated utilizing MATLAB-based simulations.

1. Introduction

This article addresses the scanning path plan strategy of a rover team composed of three rovers, such that the team explores unknown dark outer space environments. Here, the scanning path plan determines a rover’s path passing over every point of a given bounded obstacle-rich workspace, such that there is no detection hole. We consider the situation where the obstacle environment in outer space is not known in advance.
This research considers a dark outer space, where sunlight and the Global Navigation Satellite System (GNSS) are not available. Thus, a rover needs to turn on its light and camera simultaneously to measure a limited space in front of it. The rover team is deployed from a symmetric base station, and the team’s mission is to scan a bounded obstacle-rich workspace such that there is no remaining detection hole. This scenario was inspired by the NASA Space Research Challenge Phase 2 [1,2].
The purpose of the NASA Space Research Challenge Phase 2 was for the competitors’ virtual multi-robot teams to complete tasks for a lunar In-Situ Resource Utilization (ISRU) mission. This challenge allowed a virtual multi-robot team to operate autonomously while achieving these tasks. On 15 January 2021, NASA selected the Purdue-Hongik Team (the author is a member of the team) as a qualifying team to advance to the final virtual competition round [3]. In the competition, the mission was to make a virtual robot team detect, excavate, and gather underground resources to the base station. For maximizing the efficiency of resource gathering, it is desirable to scan a bounded obstacle-rich workspace without detection holes. This paper was inspired by the competition.
The NASA Space Research Challenge [1,2] considered a team of three rovers: the scouter, the hauler, and the excavator. See Figure 1 for an illustration of these three rovers. From left to right, this figure depicts the scouter, the hauler, and the excavator.
In the NASA Space Research Challenge, the scouter, the hauler, and the excavator are used for the excavation and collection of resources from the Moon [1]. Only the scouter has local sensors for detecting resources buried in the ground. Once the scouter detects resources, the detected resources are excavated by the excavator, and the resources are dropped into the bin of the hauler. See Section 4.4 for controls of the excavator.
In order to enable cooperation among the three rovers, communication links among them need to be established at all times. Thus, all three rovers need to move as a team. In the team, we assume that only the hauler can locate itself utilizing stereo cameras and an Inertial Measurement Unit (IMU). Every other rover follows the hauler without locating itself. In this way, both the scouter and the excavator do not have to run a localization process, which can be computationally heavy. See Section 4.5 for controls of hauler-following maneuvers.
The bounded obstacle-rich workspace of the NASA Space Research Challenge is presented in Figure 2. Rocks serve as obstacles in the MATLAB-based simulations (see Section 5). In Figure 2, the rover turns on its light in the dark environment.
In order to scan a bounded obstacle-rich workspace, one requires localization of a rover. The lack of distinct features on the lunar surface makes the localization problem further challenging and requires solutions that are catered specifically to lunar rovers.
Moreover, the NASA Space Research Challenge provides a dark workspace, such that a rover needs to turn on its light and camera simultaneously to measure the space in front of the rover.
While a rover with cameras maneuvers, it can localize itself utilizing various localization methods in the literature. Considering many environmental factors (surface flatness or soil softness), the odometry system of a rover can be used for the rover’s localization [4,5]. Note that the lunar surface in Figure 2 is not even, and the localization error of a rover can increase considerably while it moves on hills or mountains. Therefore, the localization error of an odometry system accumulates as time goes on [4].
Visual-inertial simultaneous localization and mapping (VI-SLAM) [6,7] or monocular SLAM [8,9,10] can be utilized for rover localization in real-time. The authors of [11] addressed a real-time algorithm generating the 3D path of a monocular camera moving through a previously unknown scene. VI-SLAM integrates camera and IMU measurements for a rover’s localization [6,7]. However, the position accuracy, computational load, and memory burden of VI-SLAM remain open challenges [7]. Under VI-SLAM, localization error inevitably accumulates as time elapses, as long as a loop closure is not observed.
VI-SLAM requires that the rover can detect reliable environmental features. Features widely utilized in VI-SLAM are ORB, SIFT, and SURF [6]. However, the lack of distinct features on the lunar surface makes the localization problem further challenging and requires solutions that are catered specifically to lunar rovers.
As far as we know, only [12] addressed the exploration of three rovers (the scouter, the hauler, and the excavator) in dark outer space environments. However, [12] did not present a strategy for covering the bounded obstacle-rich workspace, as in our research.
Considering the exploration of a dark outer space, [12] applied VI-SLAM, which is based on stereo cameras and IMU. By applying any SLAM approach in dark outer space environments, it is inevitable that localization error accumulates as time elapses [12]. This is due to the fact that distinct and stable features do not exist on the lunar surface. Moreover, the rover can only measure a limited space in front of itself by turning on its light and camera simultaneously. Considering outer space environments where the terrain is not flat (see Figure 2), tire slippage can further increase the localization error of SLAM algorithms. To correct the accumulated localization error of a rover, [12] occasionally makes the rover home to a symmetric base station, whose shape and global position are known in advance.
For example, Figure 3 plots the symmetric base station used in the NASA Space Research Challenge [1]. In Figure 3, the rover turns on its light in the dark environment, and the symmetric base station is located to the right of the rover. It is assumed that the station’s global position and shape are known in advance.
Once the rover is near the base station, it turns on its Lidar to measure the relative position of the base station. The Lidar provides the relative position of the base station with respect to the rover, and this relative position information provides a global localization for the rover. Suppose that the base station is at the origin, and that the relative position of the base station with respect to the rover is R_r. In this case, the global position of the rover is estimated as −R_r. In this way, the rover corrects its localization error whenever it homes to the base station. This homing strategy diminishes the accumulated location error to zero.
This homing strategy is used in our research, so that localization error can be fixed occasionally. In our research, one makes the rover team scan a bounded obstacle-rich workspace without detection holes, such that a rover’s location error is bounded above. For diminishing the integrated location error to zero, one makes a rover home to the base station occasionally. We acknowledge that returning to the base station can slow the exploration progress.
In our paper, a rover senses a nearby obstacle based on stereo cameras or Lidar, as presented in [12,13]. In Figure 2, rocks serve as obstacles in the MATLAB-based simulations (see Section 5). A rover can use Lidar or stereo cameras to determine whether a large obstacle lies in front of it [12,13]. In our research, one makes the rover team fully scan a bounded obstacle-rich workspace without detection holes, such that a rover’s location error is bounded and a rover avoids collision with obstacles or other rovers.
To the best of our knowledge, our article is novel in addressing the scanning path plan, so that the rover team (the scouter, the hauler, and the excavator) fully scans a bounded obstacle-rich workspace while fixing the location error occasionally. In the rover team, only the hauler needs to locate itself utilizing stereo cameras and IMU. Moreover, only the scouter has the ability to scan the space searching for buried resources. The efficacy of the proposed scanning and localization strategy is demonstrated utilizing MATLAB-based simulations.
The organization of this article is as follows. Section 2 presents the literature review related to this research. Section 3 presents the assumptions and definitions in this research. Section 4 addresses the scanning path plan strategy introduced in this research. Section 5 addresses MATLAB-based simulation results to demonstrate the performance of the proposed strategy. Section 6 provides conclusions.

2. Literature Review

As far as we know, only [12] addressed the exploration of three rovers (the scouter, the hauler, and the excavator) in dark outer space environments. In order to make a rover detect every resource buried in the ground, we need to make the rover scan the bounded obstacle-rich workspace without detection holes. However, [12] did not address a strategy for covering the complete outer space workspace using a rover team. In other words, [12] did not address a strategy for planning the path of the rover team so that the bounded obstacle-rich workspace is scanned without detection holes. See Figure 10 in [12]. In the team, [12] assumed that every rover needs to locate itself utilizing stereo cameras and IMU. In contrast, our research assumes that only the hauler locates itself utilizing stereo cameras and IMU.
There are many papers on scanning path plans to cover the entire workspace [14,15,16,17,18]. The authors of [14] addressed a literature review on the scanning path plan. The reference [15] addressed underwater sonar surveys using multiple sonar-equipped autonomous surface vehicles, such that vivid sonar images can be derived while covering the bounded obstacle-rich workspace completely. The authors of [16] addressed an algorithm for building a scanning plan over a holed rectilinear polygonal region. Reference [17] introduced a scanning path plan for scanning a spill according to aerial images from remote sensing. Reference [18] addressed the boustrophedon cellular decomposition, such that every boustrophedon cell is scanned with simple backwards and forwards maneuvering. References [19,20,21,22] applied lawnmower parallel path planning for scanning the given workspace.
However, to the best of our knowledge, no paper in the literature has considered localization of a rover while the rover fully scans the complete open space. In other words, other papers on scanning path plans did not consider the fact that a rover’s localization error increases as time goes on unless a location estimate fix is performed.
Several papers [23,24,25] showed that random maneuvers can be used to find resources in the workspace. For random search, Levy walks, Brownian motion, or ballistic (straight-line) search with random bounce can be utilized. Suppose that rovers move as a group and that they move randomly in the workspace while avoiding collision with obstacles. Under this random search strategy, complete coverage of a bounded obstacle-rich workspace cannot be assured. Moreover, the rovers’ location error accumulates as time goes on. Using MATLAB-based simulations, we show that the proposed scanning strategy outperforms this random maneuver strategy.
To the best of our knowledge, our article is novel in addressing the scanning path plan, so that the rover team (the scouter, the hauler, and the excavator) fully scans a given workspace while fixing the location error occasionally. Only the hauler can locate itself utilizing stereo cameras and IMU. Moreover, only the scouter has the ability to scan the space searching for buried resources.

3. Assumptions and Definitions

A graph G is a pair G = (V(G), E(G)), where V(G) is the vertex set and E(G) is the edge set. Each edge e ∈ E(G) has a positive integer weight w(e) ∈ ℤ⁺.
Let R stand for the hauler, positioned at R ∈ ℝ². Let R_s stand for the scouter, positioned at R_s ∈ ℝ². Let R_e stand for the excavator, positioned at R_e ∈ ℝ².
Let r_s stand for the sensing range limit of a rover. For planetary rovers, obstacles include rocks, loose soil, or regolith that causes the wheels to slip and potentially get stuck [26]. A rover senses a nearby obstacle based on stereo cameras or Lidar in dark space, as presented in [12,13]. A rover can detect an obstacle or another rover that is within r_s distance from the rover. By doing experiments using stereo cameras or Lidar in dark space, r_s can be determined. We acknowledge that the light intensity of a rover can affect r_s. As the light intensity increases, the stereo cameras can detect an obstacle that is farther from the rover. Thus, as the light intensity increases, r_s increases as well. A collision occurs in the situation where the relative distance between a rover and any obstacle boundary is less than the safety margin r_M, where r_M < r_s.
The dynamics of the hauler R are represented as Ṙ = v. Furthermore, the speed limit of R is S, i.e., ‖Ṙ‖ ≤ S.
The dynamics of the scouter R_s are represented as Ṙ_s = v_s. Note that one does not consider non-holonomic motion constraints. Furthermore, the speed limit of the scouter R_s is S_s.
While the hauler R maneuvers in a dark outer space, it can localize itself utilizing VI-SLAM, which is based on stereo cameras and IMU [6,7,12]. In the situation where the hauler uses Lidar only, its localization error may increase fast while it moves. The authors of [12] also used Lidar, IMU, and stereo cameras for rover localization. Note that among all rovers, only the hauler R can locate itself utilizing VI-SLAM. In our paper, both the scouter and the excavator do not have to run a localization process, which can be computationally heavy.
It is assumed that the hauler R can measure its travel distance using its odometry systems. Whenever R travels Δ distance starting from the base station, R returns to the base station for initializing the location error to zero. This homing approach was also used in [12]. As R is near the base station whose global position is known in advance, R measures the base station using Lidar. The Lidar provides the relative position of the base station with respect to R. This relative position information provides the global localization for R.
For instance, suppose that the base station is at the origin, and that the relative position of the base station with respect to R is R_r. In this case, the global position of R is estimated as −R_r.
Whenever the hauler R travels Δ distance starting from the base station, the hauler R returns to the base station for initializing the location error to zero. Here, Δ can be decided by estimating the localization accuracy of R. The authors of [12] used the Extended Kalman Filter (EKF) to estimate the global location of a rover, in the situation where the rover uses both IMU and stereo cameras for locating itself. The EKF provides the error covariance matrix P for the location of the hauler R. Let P_x and P_y stand for the error covariances of the global coordinates (x, y) of the hauler R, respectively. Then, u = P_x + P_y provides the uncertainty of the position estimate of R. Whenever u exceeds a certain threshold, the hauler R returns to the base station for initializing the location error to zero. As the hauler R returns to the base station, its error covariance matrix is also re-initialized.
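The uncertainty-triggered homing check and the Lidar position fix described above can be sketched as follows. This is a minimal illustration, not the paper’s implementation; the helper names (`position_uncertainty`, `lidar_position_fix`, `should_home`) are assumptions, and the EKF covariance P is assumed to carry the position-error variances P_x and P_y on its diagonal.

```python
def position_uncertainty(P):
    """u = P_x + P_y: sum of position-error variances from the EKF covariance P."""
    return P[0][0] + P[1][1]

def lidar_position_fix(base_station, station_relative_to_rover):
    """Global rover position from a Lidar fix on the base station:
    rover = station − (station relative to rover)."""
    return (base_station[0] - station_relative_to_rover[0],
            base_station[1] - station_relative_to_rover[1])

def should_home(P, threshold):
    """Home to the base station whenever the uncertainty u exceeds the threshold."""
    return position_uncertainty(P) > threshold
```

With the base station at the origin and a measured relative position R_r = (3, 4), the fix returns (−3, −4), matching the text.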
For fully covering the bounded obstacle-rich workspace, the hauler R forms guidance sensors. Guidance sensors are used to guide the motions of the hauler R. As the hauler R forms a guidance sensor, the scouter R_s fully scans the open space near R. Section 4.3 explains a strategy for fully scanning the open space near the hauler R.
Let v_1 and v_2 indicate two guidance sensors, and let v_1 and v_2 also denote their locations. For convenience, let l(v_1, v_2) stand for the straight line connecting the two guidance sensors v_1 and v_2.
Consider a circle with radius r_M centered at v_1. One says that l(v_1, v_2) is obstacle-free in the situation where the circle does not cross any obstacle while moving along l(v_1, v_2). Let ‖l(v_1, v_2)‖ stand for the length of l(v_1, v_2).
We define a graph structure, termed the guidance graph I, as I = (V(I), E(I)). Here, V(I) is the vertex set and E(I) is the edge set. Every vertex in V(I) indicates a formed guidance sensor. In addition, every edge {v_1, v_2} ∈ E(I) indicates that l(v_1, v_2) is obstacle-free and that ‖l(v_1, v_2)‖ < η r_s. Here, η ≥ 1 is a constant. In MATLAB-based simulations, one uses η = 3.
Based on the definition of E ( I ) , the hauler R evades collision with obstacles while moving along an edge in E ( I ) . Therefore, I is an obstacle-free topological map for R.
The hauler R can find a shortest path between any two vertices by accessing I. Increasing η can decrease the path length between any two vertices, as R searches for a shortest path using I. However, as η increases, the density of I increases, since more edges are generated by increasing η . This implies that the computational load increases as η increases.
See Figure 4 as an example. This figure plots I as η changes. In Figure 4, the length of r s is plotted as a dashed line at the bottom of the figure. If η = 1.1 , then I consists of three edges (three bold edges in Figure 4). However, if η = 3 , then I consists of six edges.
Whenever the hauler R forms a new guidance sensor, I is updated to contain the new guidance sensor. The weight of an edge {v_1, v_2} is ‖l(v_1, v_2)‖. By accessing the weight of every edge in I, R can find an obstacle-free shortest path from one guidance sensor to another.
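The guidance graph construction and the shortest-path query can be sketched as follows. This is a minimal illustration under stated assumptions: `is_obstacle_free` stands in for the stereo-camera/Lidar collision check, guidance sensors are indexed by their position in a list, and a standard Dijkstra search plays the role of the shortest-path computation over I.

```python
import itertools
import heapq
import math

def build_guidance_graph(sensors, eta, r_s, is_obstacle_free):
    """Guidance graph I: vertices are guidance-sensor positions; an edge joins
    v1 and v2 when the segment is obstacle-free and shorter than eta * r_s."""
    graph = {i: {} for i in range(len(sensors))}
    for i, j in itertools.combinations(range(len(sensors)), 2):
        d = math.dist(sensors[i], sensors[j])
        if d < eta * r_s and is_obstacle_free(sensors[i], sensors[j]):
            graph[i][j] = d  # edge weight = segment length
            graph[j][i] = d
    return graph

def shortest_path(graph, src, dst):
    """Dijkstra over the weighted guidance graph; returns the vertex sequence."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

Increasing `eta` adds more (longer) edges, which shortens paths at the cost of a denser graph, as noted in the text.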
The set of points s ∈ ℝ² satisfying ‖R − s‖ = r_s is called the footprint. Recall that r_s stands for the sensing range limit of a rover.
An open footprint arc of a guidance sensor stands for an arc of a footprint along an open space without obstacles. A frontier of a guidance sensor is the subset of an open footprint arc such that every point along the frontier is outside the footprint of every other guidance sensor. Thus, when no guidance sensor senses a frontier, the guidance sensors fully cover the bounded obstacle-rich workspace.
The footprint of a guidance sensor, say n, is discretized with discrete points, called footprintPnts. One generates D footprintPnts along the footprint. As D increases, the density of footprintPnts along the footprint increases. In MATLAB-based simulations, one uses D = 6. Among all footprintPnts, one searches for a footprintPnt satisfying the following conditions:
  • the footprintPnt is on a frontier.
  • the straight line connecting n and the footprintPnt is an obstacle-free path.
The found footprintPnt is termed the FrontierVertex of n. Let f ( n ) stand for the FrontierVertex of n. The hauler R utilizes a FrontierVertex as a “guidancePoint” for fully scanning the bounded obstacle-rich workspace, without colliding with obstacles.
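The FrontierVertex search over the discretized footprint can be sketched as follows. This is a minimal illustration assuming D evenly spaced footprintPnts and a user-supplied `is_obstacle_free` line check; the helper names `footprint_points` and `frontier_vertex` are assumptions.

```python
import math

def footprint_points(center, r_s, D=6):
    """D evenly spaced footprintPnts along a footprint of radius r_s."""
    cx, cy = center
    return [(cx + r_s * math.cos(2 * math.pi * k / D),
             cy + r_s * math.sin(2 * math.pi * k / D)) for k in range(D)]

def frontier_vertex(sensor, all_sensors, r_s, is_obstacle_free, D=6):
    """Return a FrontierVertex f(n) of the guidance sensor n = `sensor`:
    a footprintPnt lying outside every other footprint (i.e., on a frontier)
    and reachable from n along an obstacle-free straight line."""
    for p in footprint_points(sensor, r_s, D):
        on_frontier = all(math.dist(p, m) > r_s
                          for m in all_sensors if m != sensor)
        if on_frontier and is_obstacle_free(sensor, p):
            return p
    return None  # no FrontierVertex: this neighborhood is fully covered
```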

4. Scanning Path Plan Strategy

This article studies how to fully scan an unknown workspace as fast as possible utilizing the scouter and the hauler. The proposed scanning path planner appears in Algorithm 1. When the hauler senses no guidance sensor inside its footprint, the hauler forms a new guidance sensor. The hauler R calculates the shortest path in I from its current location to every guidance sensor with a FrontierVertex. Among these guidance sensors with a FrontierVertex, the hauler R finds the closest one, say f_R.
The hauler tracks the shortest path to f_R. While the hauler R approaches f_R, both the excavator R_e and the scouter R_s follow R. Section 4.5 presents how to make a rover follow the hauler R.
One presents the path planner of the hauler R for arriving at f_R. Suppose the hauler R plans to encounter guidance sensors n_1, n_2, …, n_end in this order. Here, n_end is the guidance sensor that has f_R as its FrontierVertex.
Since the path n_1 → n_2 → … → n_end is a subset of I, this path is obstacle-free for the hauler R. Furthermore, the length of each line segment along this path is shorter than η r_s. After the hauler R encounters n_j, where j ≤ end − 1, R approaches n_{j+1}. This maneuver repeats until the hauler R encounters n_end. After the hauler R encounters n_end, R approaches f_R.
Once the hauler R reaches f_R, it forms another guidance sensor. Since a new guidance sensor is formed, several FrontierVertices inside the sensing range of the new guidance sensor are removed. The hauler R then searches for the nearest FrontierVertex by accessing I. The hauler R repeats this maneuver until it cannot detect any other FrontierVertex. Here, R accesses I for the detection of a FrontierVertex.
Figure 5 illustrates the hauler forming a new guidance sensor. The obstacle boundaries are plotted as red curves. The path of the hauler R is shown as blue lines. The large dots along the path of the hauler R illustrate the guidance sensors formed by the hauler R. The footprint of each guidance sensor is plotted as a dotted circle. A frontier of the recently formed guidance sensor is plotted as a green curve. Two FrontierVertices are shown as two crosses along the frontier.
Algorithm 1 states that whenever the hauler R travels Δ distance starting from the base station, R tracks the shortest path in I for reaching the base station. Once the rover is near the base station, it turns on its Lidar to measure the relative position of the base station. In this way, the rover corrects its localization error whenever it homes to the base station. The authors of [12] showed that this homing strategy diminishes the accumulated location error to zero.
We have the following theorem regarding the convergence of Algorithm 1.
Theorem 1. 
All rovers move under Algorithm 1. In the situation where the hauler R cannot sense any frontier, the complete open space is scanned by footprints.
Proof. 
By contraposition, one proves the following statement: if an unexplored open space exists under Algorithm 1, then the hauler R senses a frontier associated with the unexplored open space.
Suppose an unexplored open space, say openS, exists while all rovers move under Algorithm 1. Recall that a frontier of a guidance sensor stands for a subset of an open footprint arc, such that every point along the frontier is outside the footprint of any other guidance sensor. Hence, there exists a guidance sensor, say n, which has a frontier along the boundary of the unexplored open space openS. Thus, the hauler R senses this frontier associated with openS.    □
Algorithm 1 Scanning path plan strategy
  • All rovers are at the base station;
  • repeat
  •     if the hauler R senses no guidance sensor inside the footprint then
  •         Scan(R);
  •     end if
  •     if f_R is empty then
  •         FindFrontierVertex(R);
  •     end if
  •     if the hauler R encounters f_R then
  •         Scan(R);
  •         FindFrontierVertex(R);
  •     end if
  • until there is no FrontierVertex for every guidance sensor;
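To make the control flow of Algorithm 1 concrete, the following toy sketch runs the same scan-then-move-to-frontier loop on a small grid world. It is an abstraction under stated assumptions, not the paper’s continuous planner: `sense(cell)` stands in for forming a guidance sensor and scanning its footprint, and frontier cells play the role of FrontierVertices.

```python
def neighbors(c):
    """4-connected grid neighbors."""
    x, y = c
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def scan_workspace(free_cells, start, sense):
    """Toy analogue of Algorithm 1 on a grid. Repeatedly: form a 'guidance
    sensor' at the current cell and scan its surroundings (Scan), then move
    to the nearest frontier cell, i.e., an unscanned free cell adjacent to
    the scanned region (FindFrontierVertex). Stop when no frontier remains."""
    scanned, sensors, pos = set(), [], start
    while True:
        sensors.append(pos)
        scanned |= sense(pos) & free_cells
        frontier = {c for c in free_cells - scanned
                    if any(n in scanned for n in neighbors(c))}
        if not frontier:
            return sensors, scanned  # no detection hole remains
        pos = min(frontier,
                  key=lambda c: abs(c[0] - pos[0]) + abs(c[1] - pos[1]))
```

Because every chosen frontier cell is unscanned and gets scanned on the next iteration, the scanned set grows strictly, mirroring the termination argument of Theorem 1 on a connected workspace.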

4.1. Collision Evasion of the Hauler R while Approaching f_R

In Algorithm 1, the hauler R tracks a path for reaching f_R. Let {n_1, n_2, …, n_end} stand for the order of guidance sensors along the path for reaching f_R. After meeting n_j (j ≤ end − 1), the hauler R approaches n_{j+1}. One shows that collision evasion is guaranteed while the hauler R approaches f_R.
Note that {n_1, n_2, …, n_end} is a subset of I. According to the definition of E(I), this path is obstacle-free. After meeting n_end, the hauler R approaches f_R. According to the definition of a FrontierVertex, the hauler R evades colliding with obstacles while it moves from n_end to f_R.

4.2. Controller of the Hauler

In this subsection, one addresses the controller of the hauler. One derives the controller for the hauler R in discrete-time systems.
Suppose W is the next guidancePoint that the hauler R will encounter along the path to f_R. Let R(k) stand for the 2D position of the hauler R at sample-stamp k. Let v(k) stand for the velocity of the hauler R at sample-stamp k. The dynamics of the hauler R are represented as
R(k+1) = R(k) + v(k)T = R(k) + β g(k) T,     (1)
where g(k) = W − R(k). In Equation (1), T stands for the sampling interval, and β > 0 is a tuning parameter. Equation (1) implies that the hauler R approaches W with a speed proportional to ‖g(k)‖.
The speed limit of the hauler R is S. Therefore, one handles the situation where β‖g(k)‖ > S in Equation (1). In the situation where β‖g(k)‖ > S, Equation (1) is replaced by
R(k+1) = R(k) + v(k)T = R(k) + S g(k)/‖g(k)‖ T.     (2)
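A minimal sketch of this controller, combining Equation (1) with the speed clamp of Equation (2); the function name `hauler_step` and the tuple-based state are illustrative assumptions.

```python
import math

def hauler_step(R, W, beta, S, T):
    """One discrete-time step of the hauler controller: move toward the
    guidancePoint W with speed beta * ||g(k)||, clamped to the limit S."""
    gx, gy = W[0] - R[0], W[1] - R[1]   # g(k) = W - R(k)
    speed = beta * math.hypot(gx, gy)
    if speed > S:                        # Equation (2): clamp to speed limit S
        scale = S / math.hypot(gx, gy)
        vx, vy = scale * gx, scale * gy
    else:                                # Equation (1): proportional approach
        vx, vy = beta * gx, beta * gy
    return (R[0] + vx * T, R[1] + vy * T)
```

For example, with R = (0, 0), W = (10, 0), β = 1, S = 2, and T = 1, the commanded speed 10 exceeds S, so the hauler advances 2 units toward W.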

4.3. Control Law of the Scouter

Only the scouter has local sensors for detecting resources buried in the ground. Thus, only the scouter has the ability to scan the space searching for buried resources.
Whenever the hauler R forms a guidance sensor in Algorithm 1, the scouter R s fully scans the open space near R. One addresses how the scouter R s fully scans the open space near the hauler R.
Recall that every footprint has radius r_s. Let ScanCircle stand for a circle with radius r_s/3 centered at the hauler R. The area inside this circle stands for the area that will be fully scanned by the scouter R_s.
The ScanCircle is discretized into evenly spaced points, called ScanCirclePnts. Suppose there are B ScanCirclePnts along the ScanCircle. Here, B > 0 is a tuning constant in this research. As B increases, the density of ScanCirclePnts along the ScanCircle increases. See Figure 6 for an illustration. In Figure 6, ScanCirclePnts are depicted along the ScanCircle with radius r_s/3. In this figure, D = 6 footprintPnts are depicted with red dots along the footprint centered at the hauler R. There is a rectangular obstacle (blue box) in Figure 6.
Let p_1 stand for the ScanCirclePnt where l(R, R_s) crosses the ScanCircle. One then enumerates all ScanCirclePnts starting from p_1, as plotted in Figure 6.
The scouter R_s maneuvers backwards and forwards between p_i (i ∈ {1, 2, …, B}) and the hauler R to fully scan all points on l(p_i, R). For fully scanning the open space near the hauler R, the scouter R_s visits the ScanCirclePnts and R in the following order: p_1 → R → p_2 → R → ⋯ → p_B → R.
Let v_s(k) stand for the velocity of the scouter R_s at sample-stamp k. The dynamics of the scouter R_s while approaching p_i are represented as
R_s(k+1) = R_s(k) + v_s(k)T,     (3)
where
v_s(k) = S_s (p_i − R)/‖p_i − R‖.     (4)
Equation (4) implies that the scouter R_s moves in the direction of p_i − R with its speed limit S_s. The scouter R_s keeps moving in this direction until ‖R_s − R‖ = r_s is satisfied. Once ‖R_s − R‖ = r_s is satisfied, the scouter R_s gets back to the hauler R.
While the scouter R s approaches p i , the scouter R s may be too close to an obstacle, such as a rectangular obstacle in Figure 6. Whenever the scouter R s gets too close to an obstacle, it gets back to the hauler R.
Once the scouter R_s encounters the hauler R, the scouter R_s begins approaching p_{i+1}. This maneuver repeats while i increases in the order 1 → 2 → ⋯ → B. Once i reaches B, the open space near the hauler R has been scanned completely.
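The ScanCirclePnt generation and the visit order p_1 → R → p_2 → R → ⋯ → p_B → R can be sketched as follows. This is a minimal illustration assuming a ScanCircle of radius r_s/3; the obstacle backoff and the extension of each leg out to distance r_s described above are omitted, and both function names are assumptions.

```python
import math

def scan_circle_points(R, B, r_s):
    """B evenly spaced ScanCirclePnts on a circle of radius r_s/3 about the hauler R."""
    cx, cy = R
    rad = r_s / 3.0
    return [(cx + rad * math.cos(2 * math.pi * i / B),
             cy + rad * math.sin(2 * math.pi * i / B)) for i in range(B)]

def scouter_visit_order(R, B, r_s):
    """Visit order for the scouter: out to each p_i, then back to the hauler R."""
    order = []
    for p in scan_circle_points(R, B, r_s):
        order += [p, tuple(R)]  # out along l(p_i, R), then return to the hauler
    return order
```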

4.4. Control Law of the Excavator

Suppose that the scouter R_s detects buried resources while maneuvering backwards and forwards between p_j (j ∈ {1, 2, …, B}) and L in Algorithm 2. Whenever the scouter R_s detects a place where resources are buried, the hauler R comes to the place, while the excavator R_e follows the hauler R.
Algorithm 2 Scan(R)
  • The hauler R forms a guidance sensor at its location, say L;
  • By maneuvering backwards and forwards from L, the scouter R s fully scans the open space near the hauler R (see Section 4.3 for controls of the scouter);
  • if The scouter R s senses a place where resources are buried then
  •    The hauler R arrives at the place, while the excavator R e follows R;
  •    The excavator R e excavates the resources, and drops the resources into the bin of R (see Section 4.4 for controls of the excavator);
  •    The hauler R gets back to L, while the excavator R e follows the hauler R (see Section 4.5 for controls of following maneuvers);
  • end if
Thereafter, the excavator R_e excavates the found resources under the digging controls in [12,27,28,29]. Then, the excavator R_e drops the found resources into the bin of the hauler R under the excavator controls in [12,27,28,29]. Note that developing digging controls is not within the scope of this paper.
Then, the hauler R gets back to L, while the excavator R e follows the hauler R. To make a rover follow the hauler R, one can use deep neural networks for hauler detection [12,30], as well as the robot tracking controls in [31,32,33,34,35]. Section 4.5 presents how to make a rover follow the hauler R.
In this way, the excavator can dig up all resources surrounding L in Algorithm 2. Suppose that natural resources are concentrated in a specific region surrounding L. In this case, the hauler's bin can become full while the excavator drops the resources into it.
Whenever the hauler is full of resources, it transfers them to the base station. While the hauler maneuvers to transfer the resources, both the scouter and the excavator wait at their positions. After the resource transfer is done, the hauler comes back to the waiting positions of the scouter and the excavator. Thereafter, the rover team resumes scanning the space.
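The flow of Algorithm 2 can be summarized as a short runnable sketch. The Rover class and the callbacks find_site and excavate below are hypothetical stand-ins for the controllers of Sections 4.3 and 4.4; they are not the actual simulation code.

```python
class Rover:
    """Minimal stand-in for a rover; only what the sketch needs."""
    def __init__(self, name, pos=(0.0, 0.0)):
        self.name, self.pos, self.bin = name, pos, []

def scan(hauler, scouter, excavator, L, find_site, excavate):
    """Sketch of Scan(R) in Algorithm 2.

    find_site stands in for the scouter's full scan of the open space near L
    (Section 4.3); it returns a resource site or None. excavate stands in for
    the digging controls of Section 4.4 and returns the dug-up resources.
    """
    hauler.pos = L                           # the hauler forms a guidance sensor at L
    scouter.pos = L                          # the scouter scans the open space near L
    site = find_site(scouter, L)
    if site is not None:                     # resources detected
        hauler.pos = excavator.pos = site    # the excavator follows the hauler
        hauler.bin.extend(excavate(site))    # resources dropped into the bin
        hauler.pos = excavator.pos = L       # both return to L (Section 4.5)
    return hauler.bin
```

When no resource site is found, the team simply remains at L with an empty bin, which mirrors the if-branch of Algorithm 2.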

4.5. Following Maneuvers of a Rover

To make a rover follow the hauler, we need to train the rover's stereo camera using deep neural networks [12,30]. In [12], such networks were trained to identify various classes, such as the processing plant, rocks, visible volatiles above or partially above the surface, craters, and the other rovers. Once the networks are trained, the rover can detect the hauler's image in the camera view. Also, the rover can estimate the relative distance to the hauler using the stereo camera image.
Suppose that the hauler is detected in the rover's camera image. To make the rover follow the hauler, one can use the tracking controls in [31,32,33,34,35], which design control laws that let a rover follow another object based on local sensing measurements.
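A minimal following maneuver can be sketched as a saturated pursuit step. This is an illustrative assumption, not the controllers of [31,32,33,34,35]: the standoff distance d_keep and the saturation law are chosen only for the sketch.

```python
import numpy as np

def follow_step(follower, hauler, S_max=10.0, T=0.1, d_keep=1.0):
    """One step of a following maneuver (a sketch).

    The follower drives toward the hauler's stereo-camera-estimated position,
    saturating at the speed limit S_max and stopping d_keep away so that it
    trails the hauler without colliding with it.
    """
    follower = np.asarray(follower, dtype=float)
    hauler = np.asarray(hauler, dtype=float)
    d = hauler - follower
    dist = np.linalg.norm(d)
    if dist <= d_keep:
        return follower                       # close enough; hold position
    step = min(S_max * T, dist - d_keep)      # saturate at the speed limit
    return follower + step * d / dist
```

Repeating this step drives the follower to within d_keep of a stationary hauler.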

5. MATLAB-Based Simulations

To the best of our knowledge, our article is novel in addressing a scanning path plan under which the rover team fully scans a given workspace while fixing the localization error occasionally. One demonstrates the efficacy of Algorithm 1 through MATLAB-based simulations.
The simulation parameters are set as follows. The sensing range r s is set as 3 distance units. The sampling time interval T is set as 0.1 s.
One sets r M = 0.5 distance units. The speed limit of a rover is 10 distance units per second. One sets D, the number of footprintPnts on a footprint, as 6. Furthermore, one sets B, the number of ScanCirclePnts on a ScanCircle, as 14.

5.1. Scenario 1

In Scenario 1, we generate obstacle environments inspired by Figure 2; rocks in Figure 2 are presented as obstacles in Scenario 1. Reference [12] did not address a strategy for planning the path of the rover team so that the given workspace is fully scanned without detection holes.

5.1.1. The Ideal Situation where There Is no Localization Error

Considering Scenario 1, one first simulates the ideal situation where there is no localization error. One sets Δ = ∞ in Algorithm 3.
Regarding Scenario 1, Figure 7 illustrates the rover team's paths constructed through Algorithm 1. Obstacle boundaries are shown with thick red curves; in Figure 7, the label O b s marks the obstacles. The red rectangle represents the workspace boundary. At sample-stamp 0, the position of the hauler R is ( 1 , 1 ) , which is the position of the base station. The path of the hauler R is marked with black circles. Backwards and forwards maneuvering of the scouter R s (blue lines in Figure 7) is used to scan each ScanCircle entirely. Figure 7 shows that the footprints of all guidance sensors cover the complete open space.
Regarding the scenario in Figure 7, the upper subplot of Figure 8 illustrates the 2D coordinates of the hauler with respect to time, while the lower subplot of Figure 8 illustrates the relative distance between the hauler R and its nearest obstacle boundary with respect to time. In the scanning process, the hauler’s distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units.
Algorithm 3   F i n d F r o n t i e r V e r t e x ( R )
  • The hauler R searches for the nearest FrontierVertex by accessing I;
  • The nearest FrontierVertex is set as f R ;
  • The hauler R tracks the shortest path in I for reaching f R ;
  • The excavator R e and the scouter R s follow the hauler R (see Section 4.5 for controls of following maneuvers);
  • if the hauler R has traveled Δ distance starting from the base station then
  •    The hauler R tracks the shortest path in I for reaching the base station, while the scouter R s and the excavator R e follow R (see Section 4.5 for controls of following maneuvers);
  •    Set f R = ∅ ;
  • end if
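The goal-selection logic of Algorithm 3 can be sketched as follows. As an assumption for the sketch, straight-line distance stands in for the shortest path through the information structure I, and the helper name find_frontier_vertex is illustrative.

```python
import math

def find_frontier_vertex(pos, traveled, frontier_vertices, base, Delta):
    """Sketch of FindFrontierVertex(R) in Algorithm 3.

    The hauler picks the nearest FrontierVertex as its goal f_R, unless it
    has already traveled Delta since leaving the base station, in which case
    it homes to the base to reset its localization error and clears f_R.
    Returns (goal, f_R).
    """
    if traveled >= Delta:
        return base, None                          # home to the base; f_R = empty
    f_R = min(frontier_vertices,
              key=lambda f: math.dist(pos, f))     # nearest FrontierVertex
    return f_R, f_R
```

Setting Delta to float("inf") reproduces the ideal no-homing case of Section 5.1.1, since the travel budget is then never exhausted.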

5.1.2. The Situation where There Is Localization Error

Considering Scenario 1, one next considers the situation where there is localization error. Due to the localization error, the dynamics of the hauler R may not be described using (1). Regarding the environmental perturbation (e.g., wheel slippage), the true dynamics of the hauler R are represented as
R p ( k + 1 ) = R p ( k ) + v ( k ) T + n
where n stands for the process noise due to the environmental perturbation. Here, R p is used instead of R in order to emphasize that R p is the perturbed rover position. In the simulations, n is drawn from a Gaussian distribution with mean 0 and variance 0.00005 . Since the hauler R cannot access the process noise term n , the location estimate of the hauler R is updated using (1) instead of (5).
At sample-stamp 0, R p ( 0 ) = R ( 0 ) = ( 1 , 1 ) , which is the position of the base station. Suppose that the hauler R returns to the base station at sample-stamp k. Then, one sets R p ( k ) = R ( k ) = ( 1 , 1 ) ; that is, the hauler's localization error is reset to zero at sample-stamp k.
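The drift-and-reset mechanism can be reproduced with a few lines of Python. The sketch below mirrors Equations (1) and (5) with the simulation parameters above; the function names and the use of a seeded NumPy generator are assumptions of the sketch.

```python
import numpy as np

T = 0.1                       # sampling time interval from Section 5
SIGMA2 = 0.00005              # process-noise variance used in the simulations
rng = np.random.default_rng(0)

def true_step(R_p, v):
    """Eq. (5): the true position picks up unobservable Gaussian process
    noise n (e.g., wheel slippage)."""
    n = rng.normal(0.0, np.sqrt(SIGMA2), size=2)
    return np.asarray(R_p, dtype=float) + np.asarray(v, dtype=float) * T + n

def estimated_step(R_hat, v):
    """Eq. (1): the hauler's own estimate evolves noise-free, so the
    localization error grows until a homing reset at the base station."""
    return np.asarray(R_hat, dtype=float) + np.asarray(v, dtype=float) * T
```

A homing reset then amounts to assigning both the estimate and the true position the known base-station coordinates, which zeroes the error exactly as described above.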
One sets Δ = 100 in Algorithm 3. Regarding Scenario 1 with localization error, Figure 9 illustrates the rover team's paths constructed through Algorithm 1. At sample-stamp 0, the position of the hauler R is ( 1 , 1 ) , which is the position of the base station. The path of the hauler R is marked with black circles. The hauler R occasionally homes to the base station by applying Δ = 100 in Algorithm 3; the homing maneuvers of the hauler R are marked with green diamonds in Figure 9. Backwards and forwards maneuvering of the scouter R s (blue lines in Figure 9) is used to scan each ScanCircle entirely. Figure 9 shows that the footprints of all guidance sensors cover the complete open space.
Regarding the scenario in Figure 9, subplot (a) of Figure 10 illustrates the 2D coordinates of the hauler with respect to time, while subplot (b) illustrates the relative distance between the hauler R and its nearest obstacle boundary with respect to time. In the scanning process, the hauler's distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units. Compared to Figure 8, the scanning time increases considerably, since the hauler R needs to home to the base station occasionally. Subplot (c) presents the localization error ‖ R ( k ) − R p ( k ) ‖ as k varies. See that the localization error is reset to zero whenever the hauler R returns to the base station.

5.1.3. Comparison with a Random Maneuver Strategy

As far as we know, only [12] addressed the exploration of three rovers (the scouter, the hauler, and the excavator), as considered in our study. However, [12] did not present a strategy for fully scanning a bounded obstacle-rich workspace, as presented in our study.
For comparison, one simulates a group of rovers that maneuver randomly in the bounded obstacle-rich workspace. References [23,24,25] showed that random maneuvers can be used to find resources in a workspace. Suppose that the rovers move as a group and move randomly in the workspace while avoiding collision with obstacles. Whenever the distance between a rover and an obstacle is less than 2 S s T , the rover moves away from the obstacle with speed S s . Whenever the hauler R moves a distance of S s T , the scouter R s fully scans the open space near R using the clearing strategy in Section 4.3. Regarding Scenario 1 with localization error, Figure 11 illustrates the rover team's paths constructed through this random maneuver strategy. See that the bounded obstacle-rich workspace cannot be fully scanned by random maneuvers. The simulation ends after 150 s.
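The baseline random maneuver can be sketched as follows. As assumptions for the sketch, obstacles are represented by point samples, the helper name random_step is illustrative, and a seeded NumPy generator supplies the random headings.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_step(pos, obstacles, S_s=10.0, T=0.1):
    """One step of the baseline random maneuver strategy (a sketch).

    The group heads in a uniformly random direction at speed S_s, except that
    whenever it is within 2*S_s*T of an obstacle sample it moves directly
    away from that obstacle at speed S_s.
    """
    pos = np.asarray(pos, dtype=float)
    for o in obstacles:
        away = pos - np.asarray(o, dtype=float)
        gap = np.linalg.norm(away)
        if gap < 2 * S_s * T:
            return pos + S_s * T * away / gap   # flee the obstacle
    theta = rng.uniform(0.0, 2.0 * np.pi)       # otherwise pick a random heading
    return pos + S_s * T * np.array([np.cos(theta), np.sin(theta)])
```

Because each heading is drawn independently, nothing steers the group toward unvisited regions, which is why this baseline fails to cover the workspace.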
Figure 11 shows that rovers under the random maneuver strategy cannot explore the bounded obstacle-rich workspace, since random maneuvers do not assure that the rovers cover it. In particular, this random maneuver strategy cannot reliably make the rovers move through a narrow passage. We show that the proposed scanning strategy in Algorithm 3 outperforms this random maneuver strategy.
Regarding the scenario in Figure 11, subplot (a) of Figure 12 illustrates the 2D coordinates of the hauler with respect to time, while subplot (b) illustrates the relative distance between the hauler R and its nearest obstacle boundary with respect to time. In the scanning process, the hauler's distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units. Subplot (c) presents the localization error ‖ R ( k ) − R p ( k ) ‖ as k varies. While the rovers maneuver randomly, they can get close to the base station; in this case, their localization error is reset by chance.

5.2. Scenario 2

In Scenario 2, we consider environments with large obstacles. Regarding Scenario 2 with localization error, the dynamics of the hauler R are represented using (5). At sample-stamp 0, R p ( 0 ) = R ( 0 ) = ( 1 , 1 ) which is the position of the base station. Suppose that the hauler R returns to the base station at sample-stamp k. Then, one sets R p ( k ) = R ( k ) = ( 1 , 1 ) .
One sets Δ = 100 in Algorithm 3. Regarding Scenario 2 with localization error, Figure 13 illustrates the rover team's paths constructed through Algorithm 1. At sample-stamp 0, the position of the hauler R is ( 1 , 1 ) , which is the position of the base station. The path of the hauler R is marked with black circles. The hauler R occasionally homes to the base station; the homing maneuvers of the hauler R are marked with green diamonds in Figure 13. Backwards and forwards maneuvering of the scouter R s (blue lines in Figure 13) is used to scan each ScanCircle entirely.
Regarding the scenario in Figure 13, subplot (a) of Figure 14 represents the 2D coordinates of the hauler with respect to time, while subplot (b) illustrates the relative distance between the hauler R and its nearest obstacle boundary with respect to time. In the scanning process, the hauler's distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units. Subplot (c) presents the localization error ‖ R ( k ) − R p ( k ) ‖ as k varies. See that the localization error is reset to zero whenever the hauler R returns to the base station.

Comparison with a Random Maneuver Strategy

For comparison, one simulates a group of rovers that maneuver randomly in the bounded obstacle-rich workspace. Refer to Section 5.1.3 for the random maneuver strategy.
Regarding Scenario 2 with localization error, Figure 15 illustrates the rover team's paths constructed through this random maneuver strategy. See that the bounded obstacle-rich workspace cannot be fully scanned by random maneuvers. The simulation ends after 150 s.
Figure 15 shows that rovers under the random maneuver strategy cannot explore the bounded obstacle-rich workspace, since random maneuvers do not assure that the rovers cover it. We show that the proposed scanning strategy in Algorithm 3 outperforms this random maneuver strategy.
Regarding the scenario in Figure 15, subplot (a) of Figure 16 illustrates the 2D coordinates of the hauler with respect to time, while subplot (b) illustrates the relative distance between the hauler R and its nearest obstacle boundary with respect to time. In the scanning process, the hauler's distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units. Subplot (c) presents the localization error ‖ R ( k ) − R p ( k ) ‖ as k varies. While the rovers maneuver randomly, they can get close to the base station; in this case, their localization error is reset by chance.

6. Conclusions

This article addresses the scanning path plan strategy of a rover team (the scouter, the hauler, and the excavator) exploring unknown dark outer space environments. The rover team is deployed from a symmetric base station, and the team's goal is to scan a bounded obstacle-rich workspace entirely. To fix its location estimate, the hauler occasionally homes to the base station, whose shape and global position are known in advance. Once a rover is near the station, it utilizes its Lidar to measure the relative position of the base station. In this manner, the rover fixes its localization error whenever it homes to the base station.
This article makes the rover team fully scan the given workspace without detection holes, such that a rover's localization error is bounded by letting the rover home to the base station occasionally. To the best of our knowledge, this article is novel in addressing a scanning path plan strategy under which a rover team fully scans a bounded outer space workspace without detection holes while fixing the localization error occasionally. The efficacy of the proposed scanning and localization approach is demonstrated utilizing MATLAB-based simulations. In the future, we will conduct experiments using real ground robots in order to verify the proposed scanning and localization approach rigorously.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant Number: 2022R1A2C1091682).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available from the authors upon reasonable request.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. NASA. Space Robotics Challenge. 2021. Available online: https://spacecenter.org/space-robotics-challenge/space-robotics-challenge-phase-2/ (accessed on 30 September 2024).
  2. The Artemis Plan, NASA’s Lunar Exploration Program Overview, J. Space Robotics Challenge. Website, 2021. Available online: https://www.nasa.gov/sites/default/files/atoms/files/artemis_plan-20200921.pdf (accessed on 30 September 2024).
  3. Purdue University. NASA Space Robotics Challenge Phase 2. 2024. Available online: https://polytechnic.purdue.edu/nasa-src2 (accessed on 1 October 2024).
  4. Yamauchi, G.; Nagatani, K.; Hashimoto, T.; Fujino, K. Slip-compensated odometry for tracked vehicle on loose and weak slope. ROBOMECH J. 2017, 4, 27. [Google Scholar] [CrossRef]
  5. Yoshida, K.; Hamano, H. Motion dynamics of a rover with slip-based traction model. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 3, pp. 3155–3160. [Google Scholar]
  6. Chen, C.; Zhu, H.; Li, M.; You, S. A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives. Robotics 2018, 7, 45. [Google Scholar] [CrossRef]
  7. Lynen, S.; Zeisl, B.; Aiger, D.; Bosse, M.; Hesch, J.; Pollefeys, M.; Siegwart, R.; Sattler, T. Large-scale, real-time visual–inertial localization revisited. Int. J. Robot. Res. 2020, 39, 1061–1084. [Google Scholar] [CrossRef]
  8. Strasdat, H.; Montiel, J.M.M.; Davison, A.J. Real-time monocular SLAM: Why filter? In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Washington, DC, USA, 28–30 May 2010; pp. 2657–2664. [Google Scholar]
  9. Eade, E.; Drummond, T. Scalable Monocular SLAM. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 1, pp. 469–476. [Google Scholar]
  10. Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
  11. Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed]
  12. Kilic, C.; Martinez R., B.; Tatsch, C.A.; Beard, J.; Strader, J.; Das, S.; Ross, D.; Gu, Y.; Pereira, G.A.S.; Gross, J.N. NASA Space Robotics Challenge 2 Qualification Round: An Approach to Autonomous Lunar Rover Operations. IEEE Aerosp. Electron. Syst. Mag. 2021, 36, 24–41. [Google Scholar] [CrossRef]
  13. Liu, Z.; Guo, S.; Yu, F.; Hao, J.; Zhang, P. Improved A* Algorithm for Mobile Robots under Rough Terrain Based on Ground Trafficability Model and Ground Ruggedness Model. Sensors 2024, 24, 4884. [Google Scholar] [CrossRef] [PubMed]
  14. Galceran, E.; Carreras, M. A survey on coverage path planning for robotics. Robot. Auton. Syst. 2013, 61, 1258–1276. [Google Scholar] [CrossRef]
  15. Kim, J. Multi-robot global sonar survey in the presence of strong currents. Ocean. Eng. 2019, 188, 106316. [Google Scholar] [CrossRef]
  16. Agarwal, A.; Meng-Hiot, L.; Lip, C.W. A divide and conquer algorithm for rectilinear region coverage. In Proceedings of the IEEE Conference on Robotics, Automation and Mechatronics, New Orleans, LA, USA, 26 April–1 May 2004; Volume 2, pp. 996–1001. [Google Scholar] [CrossRef]
  17. Luo, S.; Kim, J.; Min, B.C. Asymptotic Boundary Shrink Control with Multirobot Systems. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 591–605. [Google Scholar] [CrossRef]
  18. Choset, H.; Pignon, P. Coverage Path Planning: The Boustrophedon Cellular Decomposition. In Field and Service Robotics; Zelinsky, A., Ed.; Springer: London, UK, 1998; pp. 203–209. [Google Scholar]
  19. Williams, D.P.; Baralli, F.; Micheli, M.; Vasoli, S. Adaptive underwater sonar surveys in the presence of strong currents. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2604–2611. [Google Scholar]
  20. Warren, C.W. A technique for autonomous underwater vehicle route planning. IEEE J. Ocean. Eng. 1990, 15, 199–204. [Google Scholar] [CrossRef]
  21. Williams, D.P. On optimal AUV track-spacing for underwater mine detection. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Yokohama, Japan, 13–17 May 2010; pp. 4755–4762. [Google Scholar]
  22. Fransman, J.; Sijs, J.; Dol, H.; Theunissen, E.; De Schutter, B. Distributed constraint optimization for autonomous multi AUV mine counter-measures. In Proceedings of the OCEANS 2018 MTS/IEEE Charleston, Charleston, SC, USA, 22–25 October 2018; pp. 1–7. [Google Scholar]
  23. Wosniack, M.E.; Santos, M.C.; Raposo, E.P.; Viswanathan, G.M.; da Luz, M. The evolutionary origins of Lévy walk foraging. PLoS Comput. Biol. 2017, 13, e1005774. [Google Scholar] [CrossRef] [PubMed]
  24. Viswanathan, G.M.; Buldyrev, S.V.; Havlin, S.; da Luz, M.G.E.; Raposo, E.P.; Stanley, H.E. Optimizing the success of random searches. Nature 1999, 401, 911–914. [Google Scholar] [CrossRef] [PubMed]
  25. Raposo, E.; Buldyrev, S.; da Luz, M.G.D.; Santos, M.C.S.; Stanley, H.; Viswanathan, G. Dynamical robustness of Lévy search strategies. Phys. Rev. Lett. 2003, 91, 240601. [Google Scholar] [CrossRef] [PubMed]
  26. Chatterjee, S.; Isaac, K.K. Terramechanics and Path Finding. IFAC-PapersOnLine 2016, 49, 183–188. [Google Scholar] [CrossRef]
  27. Johns, R.L.; Wermelinger, M.; Mascaro, R.; Jud, D.; Hurkxkens, I.; Vasey, L.; Chli, M.; Gramazio, F.; Kohler, M.; Hutter, M. A framework for robotic excavation and dry stone construction using on-site materials. Sci. Robot. 2023, 8, eabp9758. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, L.; Zhao, J.; Long, P.; Wang, L.; Qian, L.; Lu, F.; Song, X.; Manocha, D. An autonomous excavator system for material loading tasks. Sci. Robot. 2021, 6, eabc3164. [Google Scholar] [CrossRef]
  29. Jud, D.; Kerscher, S.; Wermelinger, M.; Jelavic, E.; Egli, P.; Leemann, P.; Hottiger, G.; Hutter, M. HEAP-The autonomous walking excavator. Autom. Constr. 2021, 129, 103783. [Google Scholar] [CrossRef]
  30. Bansal, N. Yolo v3. 2022. Available online: https://towardsdatascience.com/object-detection-using-yolov3-and-opencv-19ee0792a420 (accessed on 1 October 2024).
  31. Lim, D.; Kim, J.; Kim, H. Efficient robot tracking system using single-image-based object detection and position estimation. ICT Express 2023, 10, 125–131. [Google Scholar] [CrossRef]
  32. Kobilarov, M.; Sukhatme, G.; Hyams, J.; Batavia, P. People tracking and following with mobile robot using an omnidirectional camera and a laser. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 557–562. [Google Scholar]
  33. Kautsar, S.; Widiawan, B.; Etikasari, B.; Anwar, S.; Yunita, R.D.; Syai’in, M. A Simple Algorithm for Person-Following Robot Control with Differential Wheeled based on Depth Camera. In Proceedings of the 2019 International Conference on Computer Science, Information Technology, and Electrical Engineering (ICOMITEE), Jember, Indonesia, 16–17 October 2019; pp. 114–117. [Google Scholar]
  34. Mir-Nasiri, N. Camera-based 3D Object Tracking and Following Mobile Robot. In Proceedings of the 2006 IEEE Conference on Robotics, Automation and Mechatronics, Bangkok, Thailand, 7–9 June 2006; pp. 1–6. [Google Scholar]
  35. Shojaei, K.; Chatraei, A. Robust platoon control of underactuated autonomous underwater vehicles subjected to nonlinearities, uncertainties and range and angle constraints. Appl. Ocean. Res. 2021, 110, 102594. [Google Scholar] [CrossRef]
Figure 1. The NASA Space Robotics Challenge [1] considered a team of three rovers: the scouter, the hauler, and the excavator. From left to right, this figure depicts the scouter, the hauler, and the excavator, in this order.
Figure 2. The bounded obstacle-rich workspace of the NASA Space Robotics Challenge. Rocks are presented as obstacles in the MATLAB-based simulations (see Section 5). The rover turns on its light in dark environments.
Figure 3. This figure shows the symmetric base station used in the NASA Space Robotics Challenge [1]. The rover turns on its light in dark environments. The symmetric base station is to the right of the rover in this figure. It is assumed that the station’s global position and shape are known in advance.
Figure 4. This figure represents I as η changes. The length of r s is plotted as a dashed line at the bottom of the figure. If η = 1.1 , then I consists of three edges (three bold edges in the figure). However, if η = 3 , then I consists of six edges.
Figure 5. The hauler R forming a new guidance sensor. In this figure, R, R e , and R s are shown as circular robots. The obstacle boundaries are plotted as red curves. The path of the hauler R is shown as blue lines. The large dots along the path of the hauler R illustrate the guidance sensors formed by the hauler R. The footprint of each guidance sensor is represented as a dotted circle. A frontier of the recently formed guidance sensor is shown as a green curve. Two FrontierVertices are shown as two crosses along the frontier.
Figure 6. ScanCirclePnts are depicted along the ScanCircle with radius r s 3 . In this figure, D = 6 footprintPnts are marked with red dots along the footPrint centered at the hauler R. There is a rectangular obstacle (blue box).
Figure 7. Scenario 1. In this scenario, we generate obstacle environments inspired by Figure 2. There is no localization error. One sets Δ = ∞ in Algorithm 3. Obstacle boundaries are shown with thick red curves. The red rectangle illustrates the workspace boundary. The path of the hauler R is depicted with black circles. Backwards and forwards maneuvering of the scouter R s (blue lines) is used to scan each ScanCircle entirely.
Figure 8. Scenario 1. Regarding the scenario in Figure 7, the upper subplot represents the 2D coordinates of the hauler with respect to time, while the lower subplot illustrates the relative distance between the hauler R and its closest obstacle boundary with respect to time. In the scanning process, the hauler’s distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units.
Figure 9. Scenario 1 with localization error. In Scenario 1, we generate obstacle environments, inspired by Figure 2. One sets Δ = 100 in Algorithm 3. The path of the hauler R is marked as a black circle. The hauler R occasionally homes to the base station by applying Δ = 100 in Algorithm 3. The homing maneuver of the hauler R is depicted with green diamonds. Backwards and forwards maneuvering of the scouter R s (blue lines) is used to scan a ScanCircle entirely.
Figure 10. Scenario 1 with localization error. Regarding the scenario in Figure 9, subplot (a) plots the 2D coordinates of the hauler with respect to time, while subplot (b) illustrates the relative distance between the hauler R and its closest obstacle boundary with respect to time. In the scanning process, the hauler’s distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units. Compared to Figure 8, the scanning time increases considerably, since the hauler R needs to home to the base station occasionally. Subplot (c) presents the localization error ‖ R ( k ) − R p ( k ) ‖ as k varies.
Figure 11. Scenario 1 with localization error under the random maneuver strategy. In Scenario 1, we generate obstacle environments, inspired by Figure 2. The path of the hauler R is marked as a black circle. Backwards and forwards maneuvering of the scouter R s (blue lines) is used to scan a ScanCircle entirely. See that the bounded obstacle-rich workspace can’t be fully scanned by a random maneuver. The simulation ends after 150 s pass.
Figure 12. Scenario 1 with localization error under the random maneuver strategy. Regarding the scenario in Figure 11, subplot (a) plots the 2D coordinates of the hauler with respect to time, while subplot (b) illustrates the relative distance between the hauler R and its closest obstacle boundary with respect to time. In the scanning process, the hauler’s distance to the nearest obstacle boundary always exceeds r M = 0.5 distance units. Subplot (c) presents the localization error ‖ R ( k ) − R p ( k ) ‖ as k varies.
Figure 13. Scenario 2 with localization error. One sets Δ = 100 in Algorithm 3. The path of the hauler R is depicted as a black circle. The hauler R occasionally homes to the base station by applying Δ = 100 in Algorithm 3. The homing maneuver of the hauler R is marked with green diamonds. Backwards and forwards maneuvering of the scouter R s (blue lines) is used to scan a ScanCircle entirely.
Figure 13. Scenario 2 with localization error. One sets Δ = 100 in Algorithm 3. The path of the hauler R is depicted as a black circle. The hauler R occasionally homes to the base station by applying Δ = 100 in Algorithm 3. The homing maneuver of the hauler R is marked with green diamonds. Backwards and forwards maneuvering of the scouter R s (blue lines) is used to scan a ScanCircle entirely.
Figure 14. Scenario 2 with localization error. For the scenario in Figure 13, subplot (a) plots the 2D coordinates of the hauler with respect to time, and subplot (b) plots the relative distance between the hauler R and its closest obstacle boundary with respect to time. Subplot (c) presents the localization error ∥ R ( k ) − R p ( k ) ∥ as k varies.
Figure 15. Scenario 2 with localization error under the random maneuver strategy. The path of the hauler R is marked with black circles. Backward-and-forward maneuvering of the scouter R s (blue lines) is used to scan each ScanCircle entirely. Note that the random maneuver strategy fails to fully scan the bounded obstacle-rich workspace.
Figure 16. Scenario 2 with localization error under the random maneuver strategy. For the scenario in Figure 15, subplot (a) plots the 2D coordinates of the hauler with respect to time, and subplot (b) plots the relative distance between the hauler R and its closest obstacle boundary with respect to time. Subplot (c) presents the localization error ∥ R ( k ) − R p ( k ) ∥ as k varies.

Kim, J. Autonomous Lunar Rover Localization while Fully Scanning a Bounded Obstacle-Rich Workspace. Sensors 2024, 24, 6400. https://doi.org/10.3390/s24196400