Article

Drone-Aided Path Planning for Unmanned Ground Vehicle Rapid Traversing Obstacle Area

1 Department of Computer Science and Information Engineering, National University of Kaohsiung, Kaohsiung 811, Taiwan
2 Department of Fragrance and Cosmetic Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2022, 11(8), 1228; https://doi.org/10.3390/electronics11081228
Submission received: 26 March 2022 / Revised: 10 April 2022 / Accepted: 11 April 2022 / Published: 13 April 2022
(This article belongs to the Special Issue Intelligent Edge-Cloud Collaboration for Internet of Things)

Abstract

Even with visual equipment such as cameras, an unmanned ground vehicle (UGV) on its own usually spends a long time navigating an unfamiliar area with obstacles. Therefore, this study proposes a fast drone-aided path planning approach to help UGVs traverse such areas. In this scenario, called UAV/UGV mobile collaboration (abbreviated UAGVMC), a UGV first invokes an unmanned aerial vehicle (UAV) at the scene to take a ground image and send it back to the cloud for object detection, image recognition, and path planning (abbreviated odirpp). The cloud then sends the UGV a well-planned path map to guide it through the unfamiliar area. The approach uses the one-stage object detection and image recognition algorithm YOLOv4-CSP to identify obstacles quickly and accurately and the New Bidirectional A* (NBA*) algorithm to plan an optimal route that avoids ground objects. Experiments show that the execution time of path planning for each scene is less than 10 s on average without degrading the image quality of the path map, so the user can correctly interpret the map and remotely drive the UGV rapidly through the unfamiliar obstacle area. As a result, the selected mode significantly outperforms the alternatives, with an average performance ratio of up to 3.87 relative to the baseline.

1. Introduction

The Internet of Things, system-on-a-chip devices, dynamic random access memory, flash memory, and wafer processes have made significant progress in recent years. Likewise, wireless networks have advanced to Wi-Fi 6 and 5G. Traditional industries combined with emerging high technology have greatly improved production efficiency, speed, and automation. Industries have even entered the stage of intelligent production, in which they can identify errors and issue warnings in advance or eliminate abnormal conditions by themselves. Advances in these industries have also driven unmanned vehicles such as unmanned aerial and ground vehicles (UAV/UGV). Open-source control chips [1], open-source mission planning [2], and dedicated transmission protocols [3] enable UAVs to execute planned flight paths, and loading the robot operating system (ROS) [4] onto the development board allows more complex manipulation and control. The UAV is no longer mainly a fuel-engine, remote-controlled fixed-wing aircraft; integrating a multi-axis gyroscope, barometer, and satellite positioning control chip has produced multicopters with stable attitude. At present, applications can even control clusters of thousands of UAVs through Intel technology [5]. These advances have allowed UAV applications to expand further. In addition to multi-cluster control of UAVs, batteries, motors, self-stabilizing gimbals, servo motors, cameras, and frames have all advanced significantly, allowing UAVs to carry payloads suited to different tasks [6]. These advanced technologies promote the expanded use of unmanned vehicles. Similarly, unmanned ground vehicles (UGVs) equipped with the same control system gain greater flexibility in executing tasks.
Progress in semiconductor manufacturing has dramatically improved the computing power of chips, especially single-purpose vision processing units [7]. Such equipment allows deep learning and machine learning to be used widely in various industries [8] and accelerates the development of rapid object detection and image recognition. It improves the speed of object detection while maintaining or even improving recognition accuracy, which significantly affects the operation of self-driving cars and the visual processing capabilities of unmanned vehicles. In this work, a UAV in the air captures an image of a ground scene strewn with obstacles and sends that image back to the back-end server, which performs path planning with its powerful visual computing capability. The server produces a map with a planned path and promptly sends it to the UGV on the ground. Following this map, the UGV can traverse the scene smoothly without colliding with obstacles. This differs from a sweeping robot that slowly explores routes to avoid hitting obstacles and gradually builds a map of safe movement [9]. In short, the mobile collaborative operation of unmanned vehicles can significantly reduce the time a UGV needs to traverse an unfamiliar area with obstacles.

2. Related Work

Object detection methods have been developed successfully for specific applications for years. In addition to high-end computing platforms, they can also operate on systems with limited computing resources. Tran et al. studied an optimization solution for hyperparameter tuning of an adaptive learning system [10] to improve object recognition accuracy. Using data collected while driving with an advanced driver assistance system (ADAS) to evaluate previous convolutional neural network (CNN) models, the proposed method builds a framework for searching a set of learning hyperparameters. The trained model can be more intelligent than the previous model and show more diverse recognition results, and the model is updated continuously throughout the ADAS life cycle. Mao et al. proposed a pipelined object detection method for embedded platforms [11]. They comprehensively analyzed existing object detection methods and chose the fast region convolutional neural network (Fast R-CNN) as a candidate solution. Furthermore, they modified Fast R-CNN to balance speed and accuracy on an embedded platform. Their approach applies a multi-layer pipelined parallel operation to an embedded platform with a CPU and GPU, making full use of the limited computing resources.
Both unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) have developed rapidly in recent years, extending into more application areas. Li et al. used UAVs to capture a ground image from the air and then constructed a ground map through image denoising, correction, and obstacle identification. They proposed a hybrid path planning algorithm [12] to optimize the planned path, using a genetic algorithm to search the overall path and partial rolling optimization to refine the genetic algorithm's results. Qin et al. proposed a new integrated vehicle system in which a UAV and a UGV work together [13] for autonomous exploration, mapping, and navigation in a 3D unknown environment where unmanned vehicles cannot use GPS. Experiments proved its ability to realize heterogeneous UAV/UGV collaborative exploration and environmental structure reconstruction through active simultaneous localization and mapping (SLAM) and to provide optimal perception for navigation tasks. An et al. applied self-organized aerial and ground group detection to re-entry aircraft landing detection [14], for example, for a search and rescue aircraft. The detection group consists of multiple UAVs and UGVs: UAVs can rapidly detect objects with high mobility, while UGVs can perform more comprehensive detection thanks to the wide variety of equipment they carry. Each individual makes control decisions independently of the others based on self-organization strategies. After analyzing the overall requirements of group detection, the self-organization strategy for controlling group detection proved appropriate and expandable.
To sum up, all the UAV/UGV collaborations mentioned above take considerable time to complete their missions. It is therefore necessary to develop a novel approach to resolve the time-consumption problem in ground-air vehicle collaboration. In particular, a UGV guided by rapid UAV-assisted path planning should traverse an unfamiliar obstacle area faster than one without such assistance.

3. Method

3.1. Unmanned Aerial/Ground Vehicle Cooperative Operation

This part will introduce unmanned aerial/ground vehicle cooperative operation, including system architecture, unmanned ground vehicle (UGV), and unmanned aerial vehicle (UAV).

3.1.1. System Architecture

This system integrates a UAV, a UGV, a web server, a cloud database, and Android apps into one control interface [15]. Following Internet of Things practice, the unmanned aerial/ground vehicles synchronize with the mobile phone, and the app on the phone directly displays the streaming images taken by the camera, or the thermal images captured by the infrared thermography, on the screen through the wireless network, as shown by arrows 3 and 5 in Figure 1. The system uses the open-source software phpMyAdmin to build the cloud database service and uses Anaconda to build the object detection, image recognition, and path planning services. The user controls the UAV/UGV directly through the app on the mobile phone; the embedded platform on the vehicle continuously reads data from the cloud database and performs the corresponding actions, as shown by arrow 1 in Figure 1. The app also instructs the vehicle to upload the videos or images captured by its camera to the cloud for storage in real time, without occupying storage space on the embedded platform. The vehicle's monitoring system has four gas sensors, which can detect up to 12 gases such as CO, CO2, PM2.5, and liquefied petroleum gas. The detected gas information is sent back to the database for storage in real time, as shown by arrows 2 and 4 in Figure 1, so the user can read the air quality around the vehicle in real time through the app. As for the mobile collaboration between the UGV and UAV, the UAV first captures a ground image and sends it back to the server. Next, the server recognizes obstacles in the captured image and then performs path planning on the same image to produce a planned-path map. Finally, the server transmits this map to the app on the mobile phone, and, following the planned path on this map, the user controls the UGV remotely to pass through the scene smoothly, as shown by arrows 6 and 7 in Figure 1.
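
As a concrete illustration of arrow 1 in Figure 1, the embedded platform's read-and-act loop can be sketched as a simple polling client. This is only a minimal sketch: the endpoint URLs, payload fields, and polling period are assumptions, since the paper does not specify the web server's API at this level of detail.

```python
import time
import requests

# Hypothetical endpoints on the in-cloud web server (not specified in the paper).
COMMAND_URL = "https://cloud.example.org/api/ugv/commands"
STATUS_URL = "https://cloud.example.org/api/ugv/status"

def poll_cloud_for_commands(vehicle_id: str, period_s: float = 1.0) -> None:
    """Continuously read pending commands for this vehicle from the cloud
    database and report completion, mirroring arrow 1 in Figure 1."""
    while True:
        resp = requests.get(COMMAND_URL, params={"vehicle": vehicle_id}, timeout=5)
        resp.raise_for_status()
        for cmd in resp.json().get("commands", []):
            # Dispatch to motor control, camera, or sensor-upload routines here.
            print("executing", cmd)
            requests.post(STATUS_URL,
                          json={"vehicle": vehicle_id, "command_id": cmd["id"], "state": "done"},
                          timeout=5)
        time.sleep(period_s)
```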

3.1.2. Unmanned Ground Vehicle (UGV)

Several modules build the unmanned ground vehicle, as listed in Table 1. Figure 2 shows the UGV with a multipurpose monitoring system (iMonitor) mounted on it for surveillance [16]. Three lenses sit on the front panel of the iMonitor. The one on the left is a thermal camera: after quantifying the data captured by the lens, it obtains the surface temperature distribution of objects in the field of view and generates a thermal image. The lens in the middle handles video streaming and face recognition; once a face has been detected, the app on the mobile phone automatically marks it with a red box and synchronizes the video stream through the wireless network. The lens on the far right is responsible for recording videos and taking pictures. Due to the performance limitations of the Raspberry Pi, the UGV implements video recording and photography with this additional camera. The iMonitor also has four gas sensors installed on the side panel of its box; Table 2 lists the modules used in the iMonitor. The app on the mobile phone displays the gas information for the user in real time, visualizing the air quality of the environment around the UGV.

3.1.3. Unmanned Aerial Vehicle (UAV)

Figure 3 shows a self-assembled ZD550 quadcopter equipped with a multipurpose monitoring system (iMonitor); the iMonitor itself is shown in Figure 4, and the modules that build the unmanned aerial vehicle (UAV) are listed in Table 3. The first purpose of a UAV equipped with an iMonitor is to collect information on hazardous environmental gases during flight, especially in high-risk and ground-obstructed environments such as disaster sites and chemical plants [17]. After sensing hazardous gas information, the iMonitor transmits it to the cloud server for storage immediately, and users can browse it on their mobile phones in real time. Second, the iMonitor can carry out surveillance with the in-flight camera and send the captured images back to the in-cloud server in real time; the recognition system uses these images to determine whether a detected face belongs to a registered user or to a stranger with intrusive intent. Finally, the iMonitor uses a thermal sensor lens to detect abnormal heat sources while the UAV patrols sensitive areas; for example, it can spot a feverish subject with elevated body temperature on the spot.

3.2. Android Client

The user uses the Android client app to perform the operations required for path planning, such as streaming, shooting, settings, execution, and reading results. The UAV operation interface provides a streaming display button (labeled CAM) and a path planning button (labeled PATH), as shown in Figure 5a. After the CAM button is pressed, a streaming screen and a shot button (labeled SHOT) appear, as shown in Figure 5b. Pressing SHOT makes the app capture the current image and upload it to the server for storage. After returning to the initial interface, the user can press PATH to enter the path planning interface. The app then receives the image taken by the UAV from the server, and three buttons (labeled START, GOAL, and PATHPLAN) appear for path planning, as shown in Figure 6. After the Start or Goal button is pressed, the start or goal icon appears on the image, and the user can use the vertical and horizontal slide bars to adjust the icon's position, as shown in Figure 7a,b. The user then clicks the start or goal icon, and the app uploads the icon's coordinates to the database on the server. Once the starting point and goal point are available, the user can press PATHPLAN to detect the obstacles and execute path planning on the image. While path planning runs on the server, the app changes the PATH button to NEW to indicate that the user can browse the new path planning result, as shown in Figure 8a. After NEW is pressed, the app loads the path planning result, shows the ADD button, and changes NEW back to PATH, as shown in Figure 8b. The user can then drive the UGV to the location of the first object along the path, as shown in Figure 8c. Pressing ADD pops up a pinpoint icon (labeled P), as shown in Figure 8d. The user moves the pinpoint icon onto the path close to the first object with the slide bars, and the app marks it as #1, as shown in Figure 8e; the marker is placed on the path once the user presses pinpoint #1. The user continues to drive the UGV to the second object location on the path, as shown in Figure 8f. Similarly, the user presses ADD to pop up another pinpoint icon and moves it onto the path close to the second object, as shown in Figure 8g; the app marks it as #2 in Figure 8h. By marking these pinpoints while controlling the UGV, the user can keep track of how far the UGV has moved along the path from the starting point toward the goal.

3.3. Object Detection and Image Recognition

The following will discuss how to train a deep learning model applied to object detection and image recognition.

3.3.1. Training Environment

The user establishes the object detection and image recognition model on a computing node that also performs path planning; its hardware specifications are listed in Table 4. The path planning node regularly checks the database to determine whether it needs to trigger object detection, image recognition, and path planning. Running Windows 10, the path planning node has Anaconda installed to build the Python programming environment, the PyTorch training framework for the object detection model, and the image annotation tool for creating a user-defined data set. The user also uses Visual C++ to build Darknet, which computes the anchor parameters needed for user-defined models.

3.3.2. Training Data Set

The training data for a user-defined model are collected from free internet material and self-shot images. Capturing the same object at different shooting angles, distances, sizes, and directions helps improve the model's accuracy after training. The user collected more than 500 images for the data set and used a labeling tool to annotate the recognized objects in every image, as shown in Figure 9. Each labeled image generates a document marking the coordinates of its objects. Here, the user adopts cardboard boxes as the only identification objects to simplify the design of the experiments; in other words, there is a single identification category.
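
For reference, a minimal sketch of the per-image annotation document, assuming the common Darknet/YOLO label convention (one line per object: class index followed by a box normalized to the image size). The example values are illustrative only.

```python
def yolo_label_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-space bounding box into a Darknet/YOLO label line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# e.g. a single cardboard obstacle (class 0) in a 1280 x 960 image:
print(yolo_label_line(0, 540, 300, 770, 530, 1280, 960))
# -> "0 0.511719 0.432292 0.179688 0.239583"
```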

3.3.3. Training Model

To train a user-defined model, the data are divided into a training set and a test set containing 80% and 20% of the images, respectively, and two corresponding path files are generated. After generating the path files, the system creates a YAML file and a names file for training the model. The YAML file records the locations of the path files for the training and test sets and the number of categories of identified objects, while the names file lists the category names. On the training set, the clustering analysis provided by Darknet generates three sets of anchors at different scales, enclosed in red squared boxes in Figure 10. To train a user-defined model, the user must modify the parameter configuration (.cfg) file; the parameters that need to be modified are the width, height, filters, and anchors. The object detection and image recognition model uses a 608 × 608 pixel resolution and one recognition category. Therefore, the user sets the width and height to 608 pixels, sets the number of filters to 18, and fills in the anchors computed by Darknet.
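
A minimal sketch of this preparation step is shown below. The file names and directory layout are assumptions, and the filters value follows the usual YOLO rule filters = (classes + 5) × 3, which gives 18 for the single cardboard class used here.

```python
import random
from pathlib import Path

def prepare_dataset(image_dir="dataset/images", out_dir="dataset", train_frac=0.8):
    """Split labeled images into train/test path files and write the data YAML
    described in the text (file names and YAML keys are illustrative)."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.shuffle(images)
    cut = int(train_frac * len(images))
    Path(out_dir, "train.txt").write_text("\n".join(str(p) for p in images[:cut]))
    Path(out_dir, "test.txt").write_text("\n".join(str(p) for p in images[cut:]))
    Path(out_dir, "data.yaml").write_text(
        f"train: {out_dir}/train.txt\n"
        f"val: {out_dir}/test.txt\n"
        "nc: 1\n"                      # one identification category
        "names: ['cardboard']\n"
    )

# Matching .cfg edits for the single-class 608 x 608 model:
#   width = 608, height = 608
#   filters = (classes + 5) * 3 = (1 + 5) * 3 = 18 in each detection head
#   anchors = the values computed by Darknet's clustering analysis (Figure 10)
```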
After 300 training epochs, the program stops training and outputs the record of the training process, as shown in Figure 11. After completing the model training, the program uses the test data set to evaluate the accuracy of the trained model, as shown in Figure 12. The test result shows a mean average precision (mAP) of 0.931, indicated by a red squared box; the closer the mAP is to 1, the higher the recognition accuracy.

3.4. Object Detection and Path Planning

The following describes the respective algorithms for object detection and path planning used to let an unmanned ground vehicle rapidly traverse an obstacle area.

3.4.1. Object Detection Using Scaled-YOLOv4

Based on YOLOv3, YOLOv4 [18] combines the YOLOv3 backbone Darknet53 [19] with the Cross Stage Partial Network (CSPNet) [20] and adopts the Mish activation function, improving the backbone into CSPDarknet53. Moreover, it adds the Path Aggregation Network (PAN) [21], which strengthens instance segmentation, into the neck. YOLOv4 improves YOLOv3's average precision (AP) and frames per second (FPS) by 10% and 12%, respectively [18]. Building on YOLOv4, Scaled-YOLOv4 [22] proposes a neural network model scaling method to produce modified variants. Scaled-YOLOv4 not only adjusts the depth, width, and input image resolution of the neural network but can also modify its structure: it can be scaled down to a small network or expanded to a large one to suit different classes of equipment while maintaining the best performance and accuracy. Scaled-YOLOv4 includes YOLOv4-Tiny, YOLOv4-CSP, and YOLOv4-Large.
YOLOv4-CSP is based on YOLOv4 but changes the first CSP stage of the backbone CSPDarknet53 to Darknet's residual block to reduce the backbone's floating-point operations (FLOPs). Meanwhile, combining the neck with CSPNet and the Mish activation function reduces the FLOPs in the neck, and Spatial Pyramid Pooling (SPP) [23] is repositioned accordingly in the neck structure.
Wang, Bochkovskiy, and Liao [22] used the COCO minval dataset to compare the number of parameters, floating-point operations (FLOPs), frames per second (FPS), and average precision (AP) of different models. For the backbone, they compared Darknet53 (D53) with the improved CSPDarknet53 (CD53s). They divided the neck options into FPNSPP (FPN combined with SPP), CFPNSPP (FPNSPP combined with CSP), PANSPP (PAN combined with SPP), and CPANSPP (PANSPP combined with CSP), and used LeakyReLU or Mish as the activation function. The model CD53s-CPANSPP-Mish with the highest AP was selected and denoted YOLOv4-CSP.

3.4.2. Path Searching Algorithm

A* (A Star) [24] is a widely used algorithm for finding the lowest-cost path through multiple nodes on a planar graph. It combines the advantages of the Dijkstra [25] and best-first search (BFS) [26] algorithms: while conducting a heuristic search to improve efficiency, it can still find an optimal path. The difference between those two algorithms is that Dijkstra must traverse the nodes to find the shortest path, whereas BFS executes quickly but cannot guarantee the shortest path. A* switches its search behavior dynamically between Dijkstra and BFS through the heuristic function in Equation (1), where n represents any node in the planar graph, g(n) stands for the actual distance from the initial node to node n, and h(n) is the estimated distance from node n to the target node.
$f(n) = g(n) + h(n)$ (1)
The A* algorithm divides the movement mode into four-direction and eight-direction movement [26]. Given two coordinate points $P_1(x_1, y_1)$ and $P_2(x_2, y_2)$ on the two-dimensional grid map, four-direction movement uses the Manhattan distance between the two points in Equation (2), while eight-direction movement uses the diagonal distance in Equation (3).
$D_{Manhattan}(P_1, P_2) = |x_1 - x_2| + |y_1 - y_2|$ (2)
$D_{Diagonal}(P_1, P_2) = 1.414 \times \min(\Delta x, \Delta y) + |\Delta x - \Delta y|, \quad \text{where } \Delta x = |x_1 - x_2| \text{ and } \Delta y = |y_1 - y_2|$ (3)
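
A small sketch of these cost terms, assuming grid points are given as (x, y) tuples:

```python
def manhattan(p1, p2):
    """Four-direction movement cost between grid points, Equation (2)."""
    return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])

def diagonal(p1, p2):
    """Eight-direction (diagonal) movement cost, Equation (3)."""
    dx, dy = abs(p1[0] - p2[0]), abs(p1[1] - p2[1])
    return 1.414 * min(dx, dy) + abs(dx - dy)

def f_cost(g_n, n, goal, h=diagonal):
    """A* evaluation function f(n) = g(n) + h(n), Equation (1); g_n is the
    actual cost accumulated from the start node to n."""
    return g_n + h(n, goal)
```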
A modified A* algorithm yields the Bidirectional A* (BA*) algorithm [27]. BA* initiates a two-way search from the starting and goal points and ends after the two search paths meet. The general BA* primarily uses a balanced heuristic. Nevertheless, this study uses another modified version of BA* called New Bidirectional A* (NBA*) [27], which is no longer limited to the balanced heuristic and searches faster than the general BA*.

3.4.3. Path Planning Flow

This study uses the NBA* search algorithm as the path planning method. The method operates mainly on a flat grid graph, as shown in Figure 13. First, the binarized PNG input image is converted into an occupied grid map (OGM), in which the value 1 marks an occupied node and 0 a free one; this conversion satisfies the conditions required by the NBA* search algorithm. Before path planning, the program reads the coordinates of the starting and goal points from the cloud database and adjusts them according to the resolution of the input image: if the algorithm scales the input image, it also scales the coordinates so that the starting and goal points remain correct. Once the OGM and the coordinates of the starting and goal points are ready, the NBA* search algorithm is executed for path planning. The search uses eight-direction movement, which produces smoother and more complete paths than four-direction movement. After obtaining the planned path, the program uses matplotlib to draw the path data onto an image.
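
The conversion, coordinate-scaling, and drawing steps can be sketched as follows. The NBA* search itself appears only as a placeholder call, since the text does not name a specific implementation; the threshold and colors are illustrative.

```python
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

def png_to_ogm(png_path, threshold=128):
    """Convert a binarized PNG into an occupied grid map: 1 = occupied, 0 = free.
    White pixels (the detection boxes) are treated as occupied."""
    gray = np.array(Image.open(png_path).convert("L"))
    return (gray > threshold).astype(np.uint8)

def scale_point(point, src_size, dst_size):
    """Rescale an (x, y) start or goal coordinate when the image resolution changes."""
    return (int(point[0] * dst_size[0] / src_size[0]),
            int(point[1] * dst_size[1] / src_size[1]))

def draw_path(ogm, path, start, goal, out_path="planned_path.png"):
    """Draw the planned path (yellow), start (red), and goal (green) over the grid."""
    plt.imshow(ogm, cmap="gray_r")
    xs, ys = zip(*path)
    plt.plot(xs, ys, color="yellow")
    plt.scatter(*start, color="red")
    plt.scatter(*goal, color="green")
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight")
    plt.close()

# ogm = png_to_ogm("binarized_scene.png")
# path = nba_star(ogm, start, goal)   # eight-direction NBA* search (placeholder)
# draw_path(ogm, path, start, goal)
```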

3.5. Executive Approach

The next section will present the executive flow of the proposed algorithm to help an unmanned ground vehicle rapidly traverse obstacle areas.

3.5.1. UAV/UGV Mobile Collaboration in Object Detection and Path Planning (UAGVMC-Odirpp) Algorithm

This study proposes an approach in which path planning is performed from a UAV image taken in the air at the ground UGV's request. Once the UAV has uploaded the image to the server, the UAGVMC-odirpp procedure executes the path planning operation on that image, as shown in Algorithm 1. The algorithm adjusts the resolution of the input image and then reduces its color saturation. Object detection and image recognition identify the obstacles and place a detection frame on each of them. After binarization, the white areas of the image represent the detection frames and the rest is black. The algorithm then changes the aspect ratio of the binary image from 4:3 to 1:1 to perform the path planning operation. After path planning, the server obtains the map with a planned path. The algorithm then adjusts the image resolution appropriately to speed up subsequent image processing while preserving the quality of the path planning image. At this stage, the algorithm changes the aspect ratio from 1:1 back to 4:3 and makes the background transparent. The last stage restores the transparent image to the resolution of the original input image and merges the two to obtain the final result.
Algorithm 1: UAGVMC-odirpp
Input: img1 (obtain an image from the server).
Output: img_final (put a planned-path image to the server).
 1: begin
 2:   img2 ← Downscale img1's resolution according to the Phase 1 parameter
 3:   img3 ← Reduce img2's saturation
 4:   img4 ← Input img3 to the object detection model and circle each detected obstacle
 5:   img5 ← Binarize img4
 6:   img6 ← Add square borders to img5 so that its aspect ratio becomes 1:1
 7:   img7 ← Input img6 to the path planning task and output an image
 8:   img8 ← Upscale img7's resolution according to the Phase 2 parameter
 9:   img9 ← Crop img8 so that its aspect ratio becomes 4:3
10:   img10 ← Remove img9's background
11:   img11 ← Restore img10 to the same resolution as img1
12:   img_final ← Merge img1 and img11
13: end
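
As an illustration of Algorithm 1, the image-processing steps can be sketched with Pillow as below. This is only a sketch: detect_obstacles and plan_path are placeholders supplied by the caller for the YOLOv4-CSP detector and the NBA* planner, and the saturation factor, binarization threshold, and white-to-transparent test are assumed values.

```python
from PIL import Image, ImageEnhance

def uagvmc_odirpp(img1, detect_obstacles, plan_path,
                  phase1=(128, 96), phase2=(512, 384)):
    """Illustrative Pillow rendering of Algorithm 1; the detector and planner
    are supplied by the caller."""
    img2 = img1.resize(phase1)                               # step 2: downscale
    img3 = ImageEnhance.Color(img2).enhance(0.2)             # step 3: reduce saturation
    img4 = detect_obstacles(img3)                            # step 4: circle obstacles
    img5 = img4.convert("L").point(lambda v: 255 if v > 200 else 0)  # step 5: binarize
    side = max(img5.size)
    img6 = Image.new("L", (side, side), 0)                   # step 6: pad to 1:1
    img6.paste(img5, (0, (side - img5.size[1]) // 2))
    img7 = plan_path(img6)                                   # step 7: draw planned path
    img8 = img7.resize((phase2[0], phase2[0]))               # step 8: upscale (still 1:1)
    top = (phase2[0] - phase2[1]) // 2
    img9 = img8.crop((0, top, phase2[0], top + phase2[1]))   # step 9: crop to 4:3
    img10 = img9.convert("RGBA")                             # step 10: white -> transparent
    img10.putdata([(r, g, b, 0) if r > 250 and g > 250 and b > 250 else (r, g, b, a)
                   for r, g, b, a in img10.getdata()])
    img11 = img10.resize(img1.size)                          # step 11: restore resolution
    return Image.alpha_composite(img1.convert("RGBA"), img11)  # step 12: merge
```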

3.5.2. Execution Flow

The UAGVMC-odirpp algorithm implements image processing, object detection, and path planning, as shown in Figure 14. The execution process frequently checks whether a newly submitted job exists in the cloud database; if so, the system downloads the image from the server and starts the job. The resolution adjustments in the image processing affect the execution speed of the job as well as the quality and reliability of the output, and the UAGVMC-odirpp algorithm is applied to optimize execution performance while preserving output quality. Based on the UAGVMC-odirpp algorithm, Figure 15a is an input image, and Figure 15b shows the first image resolution adjustment, which speeds up the subsequent image processing. Figure 15c reduces the image saturation to prevent image colors from causing errors in the subsequent binarization. Figure 15d is the result of identifying obstacles through object detection. Figure 15e separates the detection frames from the background through an HSV color mask to obtain a binary image. Figure 15f adds upper and lower borders to change the image's aspect ratio from 4:3 to 1:1, meeting the input conditions of the path planning stage. Figure 15g is the path planning result drawn by the plot function, where the red dot represents the starting point, the green dot the goal point, and the yellow line the planned path; rendering the background in white serves as pre-processing for the later background transparency. Figure 15h adjusts the image resolution while outputting the path planning result. Figure 15i changes the aspect ratio from 1:1 back to 4:3 by cropping. Figure 15j makes the white background transparent. Figure 15k restores the current image to the resolution of the original input image. Figure 15l combines the planned-path image with the original input image to obtain the final output, which is then uploaded to the server so that users can browse the planned path on the Android client. The object detection model marks each obstacle's position in the binarized map with a bounding box, as shown in Figure 15e,f. Only regions where the distance between any two obstacles exceeds 1.3 times the vehicle width are treated as background available for candidate paths, so the planned path is wide enough for the UGV to pass. The path planning algorithm in this study mainly draws the shortest, straightest path between the starting point and the goal point, as shown in Figure 15g,l, and the user can then remotely drive the UGV forward through the obstacle area according to the path planning result.
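
The HSV masking step of Figure 15e can be sketched with OpenCV as below; the exact HSV bounds depend on the color used for the detection frames and are illustrative here.

```python
import cv2
import numpy as np

def binarize_detection_boxes(bgr_image,
                             lower_hsv=(100, 80, 80), upper_hsv=(130, 255, 255)):
    """Keep only pixels whose HSV values fall inside the detection-frame color
    range so the boxes become white (255) and everything else black (0)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
```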

3.5.3. Predetermined Modes

In the UAGVMC-odirpp algorithm, two stages of the execution flow, Phase 1 and Phase 2, have adjustable parameters. The combination of the two parameters affects the overall performance of the execution flow and the quality of the final output image. The Phase 1 parameter affects the operations in the first five stages (reduce saturation, circle object, binarize, square border, and draw a path), and it also affects the accuracy of object detection and the quality of path planning. The Phase 2 parameter affects the operations in the last four stages of the execution flow (crop, remove background, restore resolution, and merge). Together, the two parameters affect both image processing performance and the quality of the final output image. Therefore, this study defined nine predetermined modes representing different parameter combinations, as listed in Table 5. The user then tests these modes and selects those that work correctly without degrading the output image quality.
Figure 16 shows the object detection results for the nine modes listed in Table 5. Modes 1 through 8 recognize all obstacles correctly, while Mode 9 does not; thus, the Phase 1 parameter can only be reduced to an image resolution of 128 × 96. Because Mode 9 cannot identify all the obstacles, only Modes 1 through 8 proceed to full path planning and evaluation of the final output image quality. Comparing the final output images of Mode 6, Mode 7, and Mode 8, which share the same Phase 1 parameter, the outputs of Mode 6 and Mode 7 are clear, but that of Mode 8 is blurred, as shown in Figure 17. Therefore, the Phase 2 parameter can only be reduced to an image resolution of 512 × 384. Based on these test results, this study chose Modes 1 through 7 for the following experiments and performance evaluation.

4. Experiment Results and Discussion

4.1. Executive Approach

This study explores the practical application of mobile collaborative operation between unmanned vehicles, which optimizes object detection, image recognition, and path planning, aiming for the best execution performance of the cooperative operation. In Table 6, the user sets different parameter combinations of Phase 1 and Phase 2 as the seven testing modes for the experiments. Their computational time complexity is also shown in Table 6, where b represents the branching routes and d stands for the shortest distance from entry to exit. The resolution of all input images in the experiments is 1280 × 960 pixels; the Phase 1 parameter denotes the adjusted resolution of the input image, and Phase 2 the output resolution of the drawn-path image. Mode 1 uses OpenCV to perform image processing, while Modes 2 through 7 use Pillow; all seven modes use OpenCV for fast image binarization. This study prepared nine patterns for the experiments and pre-assigned a pair of starting and goal points to each pattern as a corresponding experiment. Each experiment carries out path planning for 16 scene layouts, in which carton boxes represent obstacles. Each scene executes the planned route under the seven modes, and each experiment records the execution time of every mode from entering the starting point to reaching the goal.
Each experiment draws its path planning results for the 16 scene layouts and plots the execution time of each mode in each scene layout, based on Equation (4), in a figure. In Equation (4), $t_{kji}$ represents the execution time of mode j in scene i of experiment k, where the subscript i indexes the scenes, j the modes, and k the experiments. Accordingly, $\bar{t}_{kj}$ in Equation (4) stands for the average execution time of mode j over the scenes of experiment k. In Equation (5), $Pr_{kj}$ represents the performance ratio of each operative mode relative to Mode 1 in experiment k. Finally, $Pr_j$ in Equation (6) stands for the average performance ratio of each mode relative to Mode 1 over all experiments.
$\bar{t}_{kj} = \frac{1}{l} \sum_{i=1}^{l} t_{kji}, \quad j = 1, 2, \ldots, m, \; k = 1, 2, \ldots, n$ (4)
$Pr_{kj} = \frac{1/\bar{t}_{kj}}{1/\bar{t}_{k1}} = \frac{\bar{t}_{k1}}{\bar{t}_{kj}}, \quad j = 1, 2, \ldots, m, \; k = 1, 2, \ldots, n$ (5)
$Pr_j = \frac{1}{n} \sum_{k=1}^{n} Pr_{kj}, \quad j = 1, 2, \ldots, m$ (6)
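
A short sketch of this evaluation, assuming the raw timings are stored as a nested array indexed by experiment, mode, and scene:

```python
import numpy as np

def performance_ratios(times):
    """times[k][j][i] is the execution time of mode j in scene i of experiment k.
    Returns (Pr_kj, Pr_j): the per-experiment ratios of Equation (5) and their
    averages over experiments, Equation (6)."""
    t = np.asarray(times, dtype=float)      # shape: (n experiments, m modes, l scenes)
    t_bar = t.mean(axis=2)                  # Equation (4): average over the l scenes
    pr = t_bar[:, [0]] / t_bar              # Equation (5): ratio relative to Mode 1
    return pr, pr.mean(axis=0)              # Equation (6): average over the n experiments
```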
This study uses one scenario and the seven modes to compare the execution times of three path search algorithms, as shown in Table 7, where the execution time is measured from the beginning of the path search to its end. As a result, we chose the best-performing NBA* algorithm for path planning in the following experiments.

4.2. Data Set

The user designed nine sets of patterns indicating the starting point (red dot) and goal point (green dot) for the nine experiments, as shown in Figure 18. Each pattern set runs in conjunction with 16 scene layouts. In Figure 19, the yellow squared boxes indicate where the cartons (obstacles) are located.

4.3. Experiment Results

First, the user sets the designated coordinates of each experiment's starting point and goal point in the scene layout, as listed in Table 8; each starting-and-goal-point pair names an experiment from I to IX. Next, the user runs the experiments to perform path planning for each scene layout. The resulting planned paths are shown in Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26, Figure 27 and Figure 28; in every experiment, each scene layout displays the single optimized path from the designated starting point to the goal point.
To evaluate the path planning performance of the various operative modes, the experiments recorded the execution time of each mode over all scene layouts, as shown in Figure 29. From these data, the average execution time of each operative mode over all scene layouts in every experiment was obtained, as listed in Table 9. Based on the performance ratio, Table 9 also compares the path planning performance of the seven modes across the nine experiments.

4.4. Performance Evaluation

Mode 2 uses Pillow instead of OpenCV to perform image processing, and its average performance ratio (i.e., relative execution speed) is 1.28 times that of Mode 1. Therefore, Modes 2 through 7 all use Pillow for image processing in the UAGVMC-odirpp algorithm. From the nine sets of starting and goal points in the experiments of Section 4.3, the average performance ratio of each operative mode relative to Mode 1 is obtained, as shown in Table 10. Mode 3 reduces the Phase 1 parameter to 640 × 480 pixels, and its average performance ratio is 2.49 times that of Mode 1. Likewise, the Phase 1 parameters of Mode 4, Mode 5, Mode 6, and Mode 7 are 320 × 240, 256 × 192, 128 × 96, and 128 × 96 pixels, respectively, and their average performance ratios are 3.25, 3.39, 3.56, and 3.87 times that of Mode 1.

4.5. Discussion

Compared with the related literature, the proposed approach integrates a variety of vehicles more efficiently. It does not construct a map in advance but uses only live images to identify obstacles at the local scene and plan the current traversal path immediately. The method adopts the Scaled-YOLOv4 object detection model to detect obstacles; according to the experimental results, this model achieves better performance and is more suitable for these cases. The NBA* path planning algorithm is an improved two-way version of the BA* algorithm, itself derived from the classic and reliable A* algorithm, and its characteristics also suit the multi-obstacle layouts in this application.
Regarding limitations, because obstacle detection marks detected obstacles with bounding boxes, space utilization is poor compared with edge detection or instance segmentation algorithms, and path planning is affected by the size of the bounding boxes, so the resulting path is not particularly intuitive or smooth. Nevertheless, from the viewpoint of time, the approach is faster than the alternatives. In terms of path planning, this study does not consider the volume and wheelbase of the ground vehicle, so a vehicle with a large volume or wheelbase may not be able to follow the planned path successfully.
Moreover, the planned path may encounter particular terrain and features, for instance bridges (which the UGV could pass beneath) or paths with holes and uneven surfaces; how to handle these cases remains an open problem. Finally, it is possible to replace the user's remote control of the ground vehicle with self-driving so that the vehicle can travel more smoothly.

5. Conclusions

Generally speaking, a single UGV with camera equipment would spend a long time finding a way through an unfamiliar area with obstacles. Therefore, this study proposes a fast drone-aided, image-sensing path planning approach that supplies the UGV with an image map of a well-planned path for traversing that unfamiliar area rapidly. The proposed approach, the UAGVMC-odirpp algorithm, employs a one-stage rapid object detection and image recognition method to detect obstacles in a live image and quickly produces a planned path based on the detection results. According to the resulting path map, the user can then remotely control the UGV to move forward through the obstacle area. Future work will focus on improving object detection and path planning. For object detection, a new approach based on an instance segmentation algorithm with performance comparable to Scaled-YOLOv4 could increase the space utilization of path planning. For path planning, parameters such as the volume and wheelbase of the ground vehicle should be registered in the in-cloud database so that a smoother path can be planned from these parameters. Hopefully, these improvements can overcome the limitations mentioned in the discussion section. Finally, another future objective is to develop autonomous ground vehicles instead of remote-controlled UGVs.

Author Contributions

B.R.C. and J.-L.L. conceived and designed the experiments; H.-F.T. collected the experimental dataset, and H.-F.T. proofread the paper; B.R.C. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Technology, Taiwan, under grants MOST 110-2622-E-390-001 and MOST 109-2622-E-390-002-CC3, and the APC was funded by the Ministry of Science and Technology, Taiwan.

Data Availability Statement

Figures of sixteen scene layouts are available as follows: https://drive.google.com/file/d/1-6Jj-Iy-RP8s_8i3J0jPckttpD5UdC1z/view?usp=sharing (accessed on 10 April 2022).

Acknowledgments

This paper is supported and granted by the Ministry of Science and Technology, Taiwan (MOST 110-2622-E-390-001 and MOST 109-2622-E-390-002-CC3).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

1. Ardupilot. Autopilot Hardware Options. 2021. Available online: https://ardupilot.org/copter/docs/common-autopilots.html (accessed on 10 April 2022).
2. Ardupilot. Mission Planning. 2020. Available online: https://ardupilot.org/copter/docs/common-mission-planning.html (accessed on 10 April 2022).
3. Crespo, G.; de-Glez-Rivera, G.; Garrido, J.; Ponticelli, R. Setup of a communication and control systems of a quadrotor type Unmanned Aerial Vehicle. In Proceedings of the Design of Circuits and Integrated Systems, Madrid, Spain, 26–28 November 2014.
4. ROS. About ROS. 2021. Available online: https://www.ros.org/about-ros/ (accessed on 10 April 2022).
5. Intel. The Olympic Games Tokyo 2020 Opening Ceremony. 2021. Available online: https://inteldronelightshows.com/performance/the-olympic-games-tokyo-%c2%b7-2020-5701/ (accessed on 10 April 2022).
6. Panagiotou, P.; Kaparos, P.; Salpingidou, C.; Yakinthos, K. Aerodynamic design of a MALE UAV. Aerosp. Sci. Technol. 2016, 50, 127–138.
7. Intel. Intel® Movidius™ Vision Processing Units (VPUs). 2021. Available online: https://www.intel.com/content/www/us/en/products/details/processors/movidius-vpu.html (accessed on 10 April 2022).
8. Huang, H.; Ding, S.; Zhao, L.; Huang, H.; Chen, L.; Gao, H.; Ahemd, S.H. Real-Time Fault Detection for IIoT Facilities Using GBRBM-Based DNN. IEEE Internet Things J. 2019, 7, 5713–5722.
9. Chan, S.; Wu, P.; Fu, L. Robust 2D Indoor Localization Through Laser SLAM and Visual SLAM Fusion. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018.
10. Tran, D.-P.; Nguyen, G.-N.; Hoang, V.-D. Hyperparameter Optimization for Improving Recognition Efficiency of an Adaptive Learning System. IEEE Access 2020, 8, 160569–160580.
11. Mao, H.; Yao, S.; Tang, T.; Li, B.; Yao, J.; Wang, Y. Towards Real-Time Object Detection on Embedded Systems. IEEE Trans. Emerg. Top. Comput. 2016, 6, 417–431.
12. Li, J.; Deng, G.; Luo, C.; Lin, Q.; Yan, Q.; Ming, Z. A Hybrid Path Planning Method in Unmanned Air/Ground Vehicle (UAV/UGV) Cooperative Systems. IEEE Trans. Veh. Technol. 2016, 65, 9585–9596.
13. Qin, H.; Meng, Z.; Meng, W.; Chen, X.; Sun, H.; Lin, F.; Ang, M.H. Autonomous Exploration and Mapping System Using Heterogeneous UAVs and UGVs in GPS-Denied Environments. IEEE Trans. Veh. Technol. 2019, 68, 1339–1350.
14. An, M.; Wang, Z.; Zhang, Y. Self-organizing strategy design and validation for integrated air-ground detection swarm. J. Syst. Eng. Electron. 2016, 27, 1018–1027.
15. Chang, B.R.; Tsai, H.-F.; Lyu, J.-L.; Huang, C.-F. IoT-connected Group Deployment of Unmanned Vehicles with Sensing Units: iUAGV system. Sens. Mater. 2021, 33, 1485–1499.
16. Chang, B.R.; Tsai, H.-F.; Lin, Y.-C.; Yin, T.-K. Unmanned mobile multipurpose monitoring system—iMonitor. Sens. Mater. 2021, 33, 1457–1471.
17. Chang, B.R.; Tsai, H.-F.; Kuo, H.-C.; Huang, C.-F. Distributed sensing units deploying on group unmanned vehicles. Int. J. Distrib. Sens. Netw. 2021, 17, 1–19.
18. Bochkovskiy, A.; Wang, C.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
19. Redmon, J. Darknet: Open Source Neural Networks in C. 2013–2016. Available online: http://pjreddie.com/darknet/ (accessed on 10 April 2022).
20. Wang, C.; Mark Liao, H.; Wu, Y.; Chen, P.; Hsieh, J.; Yeh, I. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020.
21. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
22. Wang, C.; Bochkovskiy, A.; Liao, H.-Y.M. Scaled-YOLOv4: Scaling Cross Stage Partial Network. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Event, 19–25 June 2021.
23. Huang, Z.; Wang, J. DC-SPP-YOLO: Dense Connection and Spatial Pyramid Pooling Based YOLO for Object Detection. Inf. Sci. 2020, 522, 241–258.
24. Zhang, A.; Li, C.; Bi, W. Rectangle expansion A* pathfinding for grid maps. Chin. J. Aeronaut. 2016, 29, 1385–1396.
25. Luo, M.; Hou, X.; Yang, J. Surface Optimal Path Planning Using an Extended Dijkstra Algorithm. IEEE Access 2020, 8, 147827–147838.
26. Koziol, S.; Wunderlich, R.; Hasler, J.; Stilman, M. Single-Objective Path Planning for Autonomous Robots Using Reconfigurable Analog VLSI. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 1301–1314.
27. Pijls, W.; Post, H. A new bidirectional algorithm for shortest paths. Eur. J. Oper. Res. 2010, 207, 1140–1141.
Figure 1. System architecture.
Figure 2. Unmanned ground vehicle (UGV).
Figure 3. Unmanned aerial vehicle (UAV).
Figure 4. A multipurpose monitoring system in a box—iMonitor.
Figure 5. UAV interface: (a) Initial display of UAV interface; (b) Video streaming/photo shooting interface.
Figure 6. Path planning interface at UAV.
Figure 7. Path planning from UAV interface: (a) Setting starting point; (b) Setting goal point.
Figure 8. Control of UGV: (a) UGV interface; (b) display a planned path; (c) object position #1; (d) pop up pinpoint #1; (e) place pinpoint #1; (f) object position #2; (g) pop up pinpoint #2; (h) place pinpoint #2.
Figure 9. Labeling of obstacles.
Figure 10. Calculating anchors of YOLOv4-CSP model.
Figure 11. Log from the training phase.
Figure 12. Test of a trained model.
Figure 13. Path planning flow.
Figure 14. Execution flow.
Figure 15. Intermediate output during path planning: (a) input an image; (b) down resolution; (c) reduce saturation; (d) circle object; (e) binarized; (f) square border; (g) draw a path; (h) up resolution; (i) crop; (j) remove background; (k) restore resolution; (l) merge.
Figure 16. Object detection results: (a) mode 1; (b) mode 2; (c) mode 3; (d) mode 4; (e) mode 5; (f) mode 6; (g) mode 7; (h) mode 8; (i) mode 9.
Figure 17. Final output: (a) mode 6; (b) mode 7; (c) mode 8.
Figure 18. Nine sets of starting and goal points: (a) set #1; (b) set #2; (c) set #3; (d) set #4; (e) set #5; (f) set #6; (g) set #7; (h) set #8; (i) set #9.
Figure 19. Sixteen scene layouts: (a) scene layout #1; (b) scene layout #2; (c) scene layout #3; (d) scene layout #4; (e) scene layout #5; (f) scene layout #6; (g) scene layout #7; (h) scene layout #8; (i) scene layout #9; (j) scene layout #10; (k) scene layout #11; (l) scene layout #12; (m) scene layout #13; (n) scene layout #14; (o) scene layout #15; (p) scene layout #16.
Figure 20. Planned paths shown in Experiment I: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 21. Planned paths shown in Experiment II: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 22. Planned paths shown in Experiment III: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 23. Planned paths shown in Experiment IV: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 24. Planned paths shown in Experiment V: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 25. Planned paths shown in Experiment VI: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 26. Planned paths shown in Experiment VII: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 27. Planned paths shown in Experiment VIII: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 28. Planned paths shown in Experiment IX: (a) result #1; (b) result #2; (c) result #3; (d) result #4; (e) result #5; (f) result #6; (g) result #7; (h) result #8; (i) result #9; (j) result #10; (k) result #11; (l) result #12; (m) result #13; (n) result #14; (o) result #15; (p) result #16.
Figure 29. Average execution time of each mode running in all scene layouts over the experiments.
Table 1. UGV modules.
Number | Module
1 | Arduino UNO R3
2 | MPU6050 Gyroscope
3 | TB6612FNG Motor Driver Board
4 | GB37 Deceleration Motor
5 | ESP8266 WiFi Module
Table 2. iMonitor modules.
Number | Module
1 | Raspberry Pi 3B
2 | Raspberry Pi Zero
3 | Arduino UNO R3
4 | Intel Movidius MA2450 Vision Bonnet
5 | Raspberry Pi Camera V2.1 Camera
6 | MLX90640 Thermal Camera
7 | SHARP GP2Y1051AU0F PM2.5 Sensor
8 | DS-CO2-20 Sensor
9 | MQ-4 CH4 Sensor
10 | Grove Multichannel Gas Sensor
Table 3. UAV modules.
Number | Module
1 | Pixhawk 2.4.8 Flight Controller
2 | M8N GPS Module
3 | AT9S Pro 12 Channels Remote Controller
4 | R9DS 9/10 Channels Receiver Module
5 | 433 MHz 100 mW Radio Telemetry Module
6 | VX30 5.8 GHz 800 mW Video Transmitter
7 | 5.8 GHz USB Video Receiver
8 | TAROT TL68A00 2-Axis Gimbal
9 | Sony IMX477R with 6 mm Lens
10 | ZD550 55 cm Quadcopter Frame
11 | TAROT 6S 4108-380KV Motor
12 | 14 inch Carbon Fiber Propeller
13 | 40A Brushless ESC
Table 4. Hardware specifications of the path planning node.
Hardware | Specification | Amount
Server | HP Z4 G4 | 1
CPU | Intel Xeon 4108 @ 1.8 GHz | 2
Memory | 16 GB DDR4-2666 MHz | 4
GPU | NVIDIA Quadro GP100 16 GB | 1
Disk | SATA SSD 512 GB | 1
Table 5. Initial parameter combinations.
Mode | Phase 1 Parameter | Phase 2 Parameter | Tool
1 | 1280 × 960 | 1280 × 960 | OpenCV
2 | 1280 × 960 | 1280 × 960 | Pillow
3 | 640 × 480 | 1280 × 960 | Pillow
4 | 320 × 240 | 1280 × 960 | Pillow
5 | 256 × 192 | 1280 × 960 | Pillow
6 | 128 × 96 | 1280 × 960 | Pillow
7 | 128 × 96 | 512 × 384 | Pillow
8 | 128 × 96 | 256 × 192 | Pillow
9 | 80 × 60 | 256 × 192 | Pillow
Table 6. Parameter combinations.
Mode | Phase 1 Parameter | Phase 2 Parameter | Tool | Time Complexity
1 | 1280 × 960 | 1280 × 960 | OpenCV | O(b^d)
2 | 1280 × 960 | 1280 × 960 | Pillow | O(b^d)
3 | 640 × 480 | 1280 × 960 | Pillow | O(b^d)
4 | 320 × 240 | 1280 × 960 | Pillow | O(b^d)
5 | 256 × 192 | 1280 × 960 | Pillow | O(b^d)
6 | 128 × 96 | 1280 × 960 | Pillow | O(b^d)
7 | 128 × 96 | 512 × 384 | Pillow | O(b^d)
Table 7. Performance comparison of path searching algorithms (seconds).
Mode | A* | BA* | NBA*
1 | 21.2 | 13.4 | 11.5
2 | 21.2 | 13.4 | 11.5
3 | 6.8 | 3.5 | 2.9
4 | 1.54 | 0.8 | 0.6
5 | 1.13 | 0.5 | 0.4
6 | 0.2 | 0.11 | 0.09
7 | 0.2 | 0.11 | 0.09
Average | 7.47 | 4.55 | 3.87
Table 8. The designated coordinates of starting and goal points for the experiments.
Experiment Number | Starting Point Coordinate | Goal Point Coordinate
I | (80, 760) | (1200, 760)
II | (80, 760) | (1200, 480)
III | (80, 760) | (1200, 200)
IV | (80, 480) | (1200, 760)
V | (80, 480) | (1200, 480)
VI | (80, 480) | (1200, 200)
VII | (80, 200) | (1200, 760)
VIII | (80, 200) | (1200, 480)
IX | (80, 200) | (1200, 200)
Table 9. Comparison of the performance ratio of each mode in the experiments.
Experiment | Mode | Average Execution Time (s) | Performance Ratio
I | 1 | 28.55 | 1.00
I | 2 | 20.74 | 1.38
I | 3 | 12.67 | 2.25
I | 4 | 10.76 | 2.65
I | 5 | 10.52 | 2.71
I | 6 | 10.17 | 2.81
I | 7 | 9.32 | 3.06
II | 1 | 37.00 | 1.00
II | 2 | 29.72 | 1.25
II | 3 | 14.77 | 2.51
II | 4 | 11.29 | 3.28
II | 5 | 10.79 | 3.43
II | 6 | 10.25 | 3.61
II | 7 | 9.55 | 3.88
III | 1 | 43.73 | 1.00
III | 2 | 36.04 | 1.21
III | 3 | 16.24 | 2.69
III | 4 | 11.73 | 3.73
III | 5 | 11.07 | 3.95
III | 6 | 10.36 | 4.22
III | 7 | 9.53 | 4.59
IV | 1 | 34.47 | 1.00
IV | 2 | 26.97 | 1.28
IV | 3 | 14.08 | 2.34
IV | 4 | 11.10 | 3.11
IV | 5 | 10.69 | 3.22
IV | 6 | 10.23 | 3.37
IV | 7 | 9.45 | 3.65
V | 1 | 40.36 | 1.00
V | 2 | 32.50 | 1.24
V | 3 | 15.52 | 2.60
V | 4 | 11.52 | 3.50
V | 5 | 10.96 | 3.68
V | 6 | 10.35 | 3.90
V | 7 | 9.53 | 4.24
VI | 1 | 35.34 | 1.00
VI | 2 | 27.65 | 1.28
VI | 3 | 14.17 | 2.49
VI | 4 | 11.04 | 3.20
VI | 5 | 10.72 | 3.30
VI | 6 | 10.26 | 3.44
VI | 7 | 9.43 | 3.75
VII | 1 | 43.73 | 1.00
VII | 2 | 36.34 | 1.20
VII | 3 | 16.18 | 2.70
VII | 4 | 11.57 | 3.78
VII | 5 | 10.99 | 3.98
VII | 6 | 10.34 | 4.23
VII | 7 | 9.52 | 4.59
VIII | 1 | 37.51 | 1.00
VIII | 2 | 29.60 | 1.27
VIII | 3 | 14.74 | 2.54
VIII | 4 | 11.22 | 3.34
VIII | 5 | 10.81 | 3.47
VIII | 6 | 10.33 | 3.63
VIII | 7 | 9.47 | 3.96
IX | 1 | 29.07 | 1.00
IX | 2 | 21.26 | 1.37
IX | 3 | 12.89 | 2.25
IX | 4 | 10.81 | 2.69
IX | 5 | 10.52 | 2.76
IX | 6 | 10.16 | 2.86
IX | 7 | 9.40 | 3.09
Table 10. Performance evaluation.
Mode | Average Performance Ratio
1 | 1.00
2 | 1.28
3 | 2.49
4 | 3.25
5 | 3.39
6 | 3.56
7 | 3.87
