#### *1.2. AGV Localization Algorithms*

To navigate autonomously and safely, an AGV must be able to locate itself in its environment. Consequently, the localization problem has been studied extensively, and various techniques have been proposed to solve it [13]. The simplest form of localization is odometry, which estimates the current position from the velocity and rotation of the wheels (wheel odometry), inertial measurement units (IMU odometry), a laser source (laser odometry), or images (visual odometry). For instance, a LiDAR-only odometry method [14] integrated the LiDAR Odometry and Mapping (LOAM) algorithm to estimate odometry, then segmented the local map with a Convolutional Neural Network (CNN) before applying a two-stage RANSAC to verify position matches in the local map. Moreover, Zhao et al. [15] proposed a multi-modal sensor fusion framework that combines tightly coupled and loosely coupled optimization methods around primary IMU odometry factors and can operate in several challenging environments.

In contrast, Simultaneous Localization And Mapping (SLAM) technology combines a map-building process and a localization process. In [16], the authors enhanced the localization method with least-squares-based geometric matching to compensate for the predicted position. Using 2D LiDAR scans, Millane et al. [17] applied a Determinant-of-Hessian-based detector to find points of high curvature on the Signed Distance Function (SDF) for place recognition. Although LiDAR-based SLAM methods provide helpful information for determining free-space regions and characterizing places for localization, they appear inefficient in structure-less environments, e.g., long corridors, tunnels, and dusty or foggy areas. Sensor-fusion-based odometry methods, on the other hand, have proven accurate and robust in various scenarios, even in challenging environments.

Currently, modern image classification systems based on deep neural networks, such as Inception V3 [18] and YOLO V3 [19], are more accurate than traditional machine learning classifiers. In general, mobile robots are equipped with LiDAR for localization because of its accuracy, speed, and 3D reconstruction ability, and a deep neural network can extract features from the LiDAR point-cloud data. For example, Chen et al. [20] extracted 2D LiDAR features and used an SVM to recognize and track pedestrians in front of the robot. However, in environments with repetitive patterns, e.g., long corridors, the collected LiDAR point clouds are sparse. As a result, it is challenging to localize the AGV precisely, which leads to mislocalization or the kidnapped-robot problem. When a mobile robot fails to localize itself because of sparse LiDAR point clouds, relocalization methods are needed to recover the AGV's position. A SLAM localization system typically relocalizes the AGV through the Monte Carlo Localization (MCL) method, which takes a long time and is unhelpful in broad-space scenarios. Therefore, Wi-Fi fingerprinting was proposed to solve the kidnapped-robot problem [21], and MCL was integrated with the Fast Library for Approximate Nearest Neighbors (FLANN) machine learning technique to address it as well [22].
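To make the idea of feeding LiDAR data to a neural network concrete, the following minimal Python sketch rasterizes a 2D LiDAR scan into a fixed-size occupancy image that a CNN classifier could consume. This is an illustrative assumption on our part, not the pipeline of [20] or of this paper; the function `scan_to_image` and all parameter values are hypothetical.

```python
import numpy as np

def scan_to_image(ranges, angle_min=-np.pi, angle_max=np.pi,
                  max_range=10.0, size=64):
    """Rasterize a 2D LiDAR scan into a binary occupancy image.

    Hypothetical preprocessing step: a grid image like this could be
    fed to a CNN classifier to characterize places even when the
    point cloud is sparse.
    """
    angles = np.linspace(angle_min, angle_max, len(ranges))
    valid = np.isfinite(ranges) & (ranges < max_range)
    # Polar -> Cartesian coordinates in the robot frame.
    xs = ranges[valid] * np.cos(angles[valid])
    ys = ranges[valid] * np.sin(angles[valid])
    # Map metric coordinates onto pixel indices of a size x size grid.
    cols = ((xs + max_range) / (2 * max_range) * (size - 1)).astype(int)
    rows = ((ys + max_range) / (2 * max_range) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.float32)
    img[rows, cols] = 1.0  # mark cells hit by a LiDAR return
    return img
```

A typical call would be `img = scan_to_image(np.asarray(scan_ranges))`, after which `img` can be stacked into a batch for any standard image classifier.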

#### *1.3. Contributions*

Motivated by the discussion above, this paper focuses on controlling the movement of the AGV along curved paths and localizing the AGV within the localization system in structure-less environments (long corridors). The main contributions of our work are as follows:

- An improved Pure Pursuit trajectory-tracking algorithm with turning prediction-based speed adjustment for movement along curved paths.
- A deep learning-based selection strategy that uses 2D LiDAR point-cloud features to localize the AGV in structure-less environments such as long corridors.
The remainder of this paper is organized as follows. In Section 2, the hardware platform and vehicle kinematics of the robot system are described. In Section 3, the improved Pure Pursuit algorithm using turning prediction-based speed adjustment is introduced. The deep learning-based selection strategy using 2D LiDAR point-cloud features for the localization task is discussed in Section 4. In Section 5, the practical experimental results and verification of the proposed method are reported. Finally, in Section 6, the conclusions are presented.

#### **2. Robot System**

The mobile robot has four differential wheels driven by two motors, one on each of the left and right sides. In addition, the hardware platform is equipped with two LiDAR systems that obtain 360-degree point-cloud information at the front and back of the robot for SLAM [23]. The schematic diagram of our mobile robot hardware platform is shown in Figure 1.

**Figure 1.** The schematic diagram of mobile robot hardware platform.

#### **3. Design of Trajectory-Tracking System**

The mobile robot in this paper is driven in a differential-wheel mode, where the velocities of the left and right wheels determine the overall linear and angular velocity of the robot. The coordinate system of the mobile robot is shown in Figure 2, where (*x*, *y*) is the location of the mobile robot, *L* is the distance between the left and right wheels, *θ* is the angle between the mobile robot and the X-axis, *υ<sub>R</sub>* is the velocity of the right wheel, *υ<sub>L</sub>* is the velocity of the left wheel, *υ* is the velocity of the mobile robot, and *ω* is the angular velocity of the mobile robot. The kinematic model of the differential wheel is as follows:

$$\dot{x} = \upsilon\cos\theta \tag{1}$$

$$\dot{y} = \upsilon\sin\theta \tag{2}$$

$$\omega = \frac{\upsilon_R - \upsilon_L}{L} \tag{3}$$

$$\upsilon = \frac{\upsilon_R + \upsilon_L}{2} \tag{4}$$

$$\upsilon_L = \upsilon - \frac{L\omega}{2} \tag{5}$$
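As a quick illustration, the kinematic model above can be coded directly. The following Python sketch maps between wheel speeds and body velocities per Eqs. (1)–(5); the wheel separation `L = 0.5` m is an assumed, illustrative value, not a parameter from this paper, and the right-wheel formula is the mirror of Eq. (5).

```python
import numpy as np

L = 0.5  # wheel separation (m); illustrative value, not from the paper

def forward_kinematics(v_R, v_L, theta):
    """Body velocities from wheel speeds, per Eqs. (1)-(4)."""
    v = (v_R + v_L) / 2.0          # Eq. (4): linear velocity
    omega = (v_R - v_L) / L        # Eq. (3): angular velocity
    x_dot = v * np.cos(theta)      # Eq. (1)
    y_dot = v * np.sin(theta)      # Eq. (2)
    return x_dot, y_dot, omega

def inverse_kinematics(v, omega):
    """Wheel speeds from a commanded (v, omega) pair."""
    v_L = v - L * omega / 2.0      # Eq. (5)
    v_R = v + L * omega / 2.0      # mirror of Eq. (5)
    return v_L, v_R
```

For example, commanding `v = 0.5` m/s and `omega = 0.2` rad/s yields `v_L = 0.45` and `v_R = 0.55` m/s, which a low-level motor controller would then track on each side.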

**Figure 2.** The schematic diagram of the differential-wheel model.
