Article

Research on Lane-Changing Decision Making and Planning of Autonomous Vehicles Based on GCN and Multi-Segment Polynomial Curve Optimization

1 School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
2 China North Artificial Intelligence & Innovation Research Institute, Beijing 100072, China
3 National Key Laboratory of Special Vehicle Design and Manufacturing Integration Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(5), 1439; https://doi.org/10.3390/s24051439
Submission received: 18 January 2024 / Revised: 15 February 2024 / Accepted: 20 February 2024 / Published: 23 February 2024
(This article belongs to the Section Vehicular Sensing)

Abstract

This paper considers the interactive effects between the ego vehicle and other vehicles in a dynamic driving environment and proposes an autonomous vehicle lane-changing behavior decision-making and trajectory planning method based on graph convolutional networks (GCNs) and multi-segment polynomial curve optimization. First, the dynamic driving environment is modeled hierarchically, aggregating the dynamic interaction information of driving scenes in the form of graph-structured data. A graph convolutional neural network processes the interaction information and generates the ego vehicle's driving behavior decision commands. Subsequently, collision-free drivable areas are constructed based on the dynamic driving scene information. An optimization-based multi-segment polynomial curve trajectory planning method is then used to solve the optimization model, yielding collision-free motion trajectories that satisfy dynamic constraints and efficiently complete the vehicle's lane-changing maneuver. Finally, simulation and on-road vehicle experiments are conducted to validate the proposed method. The experimental results demonstrate that the proposed method outperforms traditional decision-making and planning methods, exhibiting good robustness, real-time performance, and strong scenario generalization capabilities.

1. Introduction

In recent years, autonomous vehicles have become one of the research hotspots in the field of vehicle engineering. Automated driving systems typically consist of components such as an environment perception module, a decision-making module, a trajectory planning module, and a tracking control module [1]. Among them, the decision-making module is an important bridge connecting the perception module and the planning module: it must fully extract the perception information of the driving environment and output correct decision instructions. The planning module, in turn, outputs a safe and smooth planned trajectory based on the instructions it receives from the decision-making module. Both modules play a key role in the automated driving system.
This paper designs a comprehensive framework for autonomous vehicle lane-changing decision making and trajectory planning. It employs graph convolutional networks (GCNs) to extract decision information from driving environment data. In combination with a dynamically constructed drivable area, an optimization-based multi-segment polynomial curve trajectory planning method plans the optimal lane-changing trajectory, enabling autonomous lane changing in complex dynamic interactive environments. The contributions of this paper are summarized as follows:
(1) Taking position, speed, and acceleration information as input, a GCN model of the driving environment is constructed, and a vehicle decision-making method is proposed that comprehensively considers the interaction between the autonomous vehicle and the driving environment and better reflects the influence of other vehicles' information on the lane-change decisions of the autonomous vehicle.
(2) A method for dynamically constructing convex polygon drivable areas is proposed based on the positions, velocities, and accelerations of the ego vehicle and surrounding vehicles. This provides a collision-free optimization space for subsequent trajectory planning, improving planning efficiency.
(3) An optimization-based multi-segment polynomial curve trajectory planning method is introduced. It divides a complete trajectory into equally timed polynomial curve segments, each with its own optimization objective function. This targeted optimization of the motion parameters of each segment enhances the tracking accuracy of the trajectory planning algorithm.
(4) The feasibility of the proposed method is verified through simulation and on-road vehicle experiments, and its performance is shown to be superior to that of existing decision-making and planning methods.
The overall structure of the paper is as follows. Section 2 reviews relevant methods in the field of autonomous vehicle behavior decision making and trajectory planning. Section 3 explains the construction of the behavior decision-making module based on the GCN. Section 4 describes the construction of the collision-free drivable area. Section 5 provides detailed insights into the optimization-based multi-segment polynomial curve trajectory planning method. Section 6 presents the results and analysis of the simulation and on-road vehicle experiments. Finally, Section 7 draws conclusions.

2. Related Works

Decision making and trajectory planning play a key role in autonomous driving systems, as they are essential for autonomous vehicles to make safe, efficient, and law-compliant driving decisions and to drive safely in complex environments. A large number of studies have been carried out in related fields, and research on decision-making and trajectory planning methods for autonomous vehicles is summarized in the following subsections.

2.1. Autonomous Vehicle Decision-Making Method

As an important part of autonomous vehicles, the decision-making module plays a crucial role in its autonomous driving process. The complex interaction process between autonomous vehicles and other traffic environment information, such as obstacles, has an important impact on autonomous vehicle decision instructions. The current mainstream vehicle decision-making methods can be divided into two categories according to their principles: non-data-driven methods and data-driven methods. Non-data-driven methods include common decision-making methods such as system construction methods based on finite state machines, game theory, etc. The hierarchical finite state machine can switch the vehicle state according to the environment of the vehicle and the defined rules and can meet the state decision requirements of simple scenarios such as valet parking and fixed-point recall [2,3]. Decision algorithms based on game theory can imitate the behavior of human drivers [4]. Stackelberg game theory is applied to the lane change modeling process of autonomous vehicles to achieve human-like interaction [5]. Meanwhile, game theory is also widely used in the behavioral decision making of autonomous vehicles in driving scenarios such as on-ramp entry on highways [6]. Data-driven decision methods are mainly divided into deep learning and reinforcement learning. These include support vector machines (SVM) [7], Extreme Learning Machines (ELM) [8], reinforcement learning (RL) [9], and deep neural networks (DNNs) [10]. The convolutional neural network model is established to detect, identify, and abstract the information in the input road scene, simulate human drivers, and make human-like lane change decisions for autonomous vehicles [11]. At the same time, the SVM algorithm can also be combined with Bayesian parameter optimization to deal with lane change decision making of autonomous vehicles [12]. The sequential decision problems of lane changing and overtaking can be modeled as Markov decision processes, and reinforcement learning methods are applied to decision making [13]. Graph neural networks (GNNs) excel in handling complex traffic scenarios and have been integrated into the decision-making processes of intelligent agents [14]. In a recent study by [15], GNNs combined with Double Deep Q-learning networks demonstrated effective multi-vehicle decision making in dynamic scenarios. Moreover, GNNs are applied to unmanned aerial vehicle (UAV) decision making [16], enabling UAVs to autonomously learn visual motion control strategies for precise landings in identified areas. These findings highlight GNNs’ crucial role in enhancing scene feature extraction and shaping behavioral decision making for intelligent agents.

2.2. Autonomous Vehicle Trajectory Planning Method

Autonomous vehicle trajectory planning methods can be summarized into three types: methods based on graph search, on sampling, and on numerical optimization. Among them, the Dijkstra algorithm and the A* algorithm are the most representative graph-search path planning algorithms. The Dijkstra algorithm can find the shortest path from the starting point to the target point [17,18]. The A-star (A*) algorithm introduces a heuristic function into graph search and defines different weights, improving path search efficiency [19,20]. Sampling-based trajectory planning algorithms are represented by the Rapidly exploring Random Tree (RRT) [21] and the probabilistic roadmap (PRM) [22]. To address the large computational load and slow convergence of traditional sampling methods [23], a trajectory planning algorithm based on Sparse-RRT* was proposed to reduce the amount of computation, and the effectiveness of the trajectory planner was verified by experimental results. In addition, another variant of the RRT* algorithm, RRT* with tunable bounds, was proposed [24] to achieve faster convergence to the optimal solution, and the simulation results support the improved performance. For trajectory planning methods based on numerical optimization [25], the MIQP method is used to divide the lane change into multiple stages, thereby establishing the collision constraints at each stage as hard constraints and introducing a safety distance to ensure the safety of the planned trajectory. Gutjahr et al. [26] used a model-based optimization method in the Frenet coordinate system (FCOS). They transformed the planning problem into a coordinate system based on arc length, which makes the planning problem easier to solve. Researchers [27,28] proposed a spline-model-based method to plan vehicle travel paths to ensure the continuity of path curvature and applied optimization techniques to solve the path planning problem. The trajectory planning of autonomous vehicles is often intricately connected with behavioral decision making. Currently, numerous studies integrate trajectory planning modules with behavioral decision modules to form a comprehensive decision-making and planning framework. For instance, ref. [29] developed a hierarchical motion planning system for lane-changing scenarios. They employed artificial potential fields to choose the target lane for lane changing and utilized fifth-order B-spline curves and quadratic programming to generate the desired trajectory. Another approach by [30] combines sampling and optimization in a layered motion planning method for urban roads. This method employs a sampling-based approach to construct a discrete model for decision making in the driving environment, while a lower-level planner determines micro-level motion trajectories based on the integrated environmental model.

2.3. Problems

For autonomous vehicle behavior decision-making methods, non-data-driven methods have strong interpretability and can complete established decision-making tasks according to prior rules, but they suffer from incomplete rule coverage and cannot make accurate decisions for dynamic changes in the driving environment that fall outside the scope of the rules. Compared with non-data-driven methods, data-driven methods can learn driving strategies from a large amount of real driving data, are more robust to environmental changes [31], and can be applied to a wider range of scenarios. To facilitate the data learning of neural networks and improve the accuracy and robustness of model construction, how to describe the characteristics of various complex driving environments more accurately is the core issue for data-driven methods. For autonomous vehicle trajectory planning methods, compared with sampling and search methods, optimization-based trajectory planning can usually obtain better solutions, avoid complex iteration and search processes, save computing resources, and achieve higher algorithmic efficiency. However, complex vehicle kinematics and obstacle avoidance constraints reduce the efficiency of solving the optimization problem. How to reasonably formulate the vehicle model and the collision model is one of the pressing problems in trajectory planning. Therefore, this paper adopts the GCN as the construction method of a data-driven model and uses multiple pieces of information, such as the positions, speeds, and accelerations of the ego vehicle and other vehicles, to build graph-structured data, so as to solve the accuracy problem of environmental feature description. At the same time, a collision-free convex polygon drivable area is constructed according to the dynamic information of the ego vehicle and other vehicles, and collision-free trajectory planning can be carried out in this area, eliminating the impact of collision detection on trajectory planning efficiency. Finally, an optimization-based multi-segment polynomial curve trajectory planning method is adopted, and different objective functions are set for different segments to achieve the planning objectives more quickly and accurately.

3. Decision Making Based on GCN

3.1. Construction of Graph-Structured Data in Dynamic Interactive Driving Scenes

In a typical automated driving scenario, the region of interest of the autonomous vehicle must be delimited first so that the decision-making module can extract the scene information. Since real driving scenes are often complex and varied and contain a large number of interactive driving behaviors between vehicles, it is difficult for traditional methods to characterize such scenes effectively. A GCN, however, models global information through the adjacency matrix and reflects the influence of interactive behaviors in the form of connected edges, which greatly helps in characterizing interactions in dynamic scenes. Figure 1 shows the division of the regions of interest in the driving scene and the connection strategy for the interactions between vehicles. First, the region of interest is constructed within a certain range in front of, behind, and to the left and right of the ego vehicle according to the driving scene, and the interaction graph is constructed within this region of interest. Then, a local subgraph is constructed from the interaction objects of the ego vehicle's driving behavior according to the specific lane-change decision instruction, as shown in Figure 2. In this way, a comprehensive representation of the driving scene is achieved at the two levels of the region of interest and the local subgraph.
For GCN, the layer-by-layer propagation rules are as follows:
H^{(l+1)} = \sigma\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right)
Each vehicle is represented as a node, and an adjacency matrix with N nodes is constructed:
\tilde{A} = A + I_N
where A \in \mathbb{R}^{N \times N} is the adjacency matrix, I_N is the identity matrix, \tilde{A} is the adjacency matrix with added self-connections, \tilde{D} is the degree matrix of \tilde{A}, W^{(l)} is the weight matrix of the l-th layer, \sigma(\cdot) is the activation function, and H^{(l)} is the activation matrix of the l-th layer.
H^{(0)} = X
X is the feature matrix of the input nodes, whose data are obtained from the sensors and can be expressed as:
X = [x_{self}, x_{other}]
In order to fully consider the influence of the states of the ego vehicle and other vehicles on the decision instructions, the position, speed, and acceleration information of the ego vehicle and the other vehicles is all taken into account when constructing the interaction relationships between the nodes of the input GCN graph, where x_{self} represents the node features of the ego vehicle and x_{other} represents the node features of the other vehicles, which are specifically expressed as follows:
x_{self} = [x_0, y_0, v_x, v_y, a_x, a_y]
x_{other} = [x_r, y_r, v_{xi}, v_{yi}, a_{xi}, a_{yi}]
where i \in \{0, 1, \ldots, N-1\}. The features of the ego vehicle include the lateral distance y_0 and longitudinal distance x_0 relative to the center line of the initial lane, the lateral speed v_y and longitudinal speed v_x, and the lateral acceleration a_y and longitudinal acceleration a_x. The features of the other vehicles include the lateral distance y_r and longitudinal distance x_r relative to the ego vehicle, the lateral speed v_{yi} and longitudinal speed v_{xi}, and the lateral acceleration a_{yi} and longitudinal acceleration a_{xi}.
For dynamic interaction scenarios, we use the connection strategy shown in Figure 1 to characterize the interaction relationships between vehicles. First, the region of interest is modeled hierarchically: a global interaction graph containing all vehicles in the region of interest and a local subgraph containing only the interaction objects of the ego vehicle are constructed, respectively. The ego vehicle is connected to all vehicles in the graph through undirected edges. Since the node feature matrix already contains the relative position information between vehicles, a connected edge only indicates whether a connection relationship exists between two nodes, as given by the following formula:
a_{ij} = \begin{cases} 0, & \text{unconnected} \\ 1, & \text{connected} \end{cases}
The aggregated global information is input into the GCN in the form of graph-structured data to output the behavior decision instructions of the autonomous vehicle; that is, a mapping is established between the graph-structured data of the dynamic scene at each moment and the optimal driving behavior instruction of the autonomous vehicle, which is then used by the subsequent trajectory planning module.
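As a minimal illustration of the graph construction and propagation rule above, the following NumPy sketch builds the self-looped adjacency matrix and its degree matrix for a three-vehicle scene in which the ego vehicle is connected to every other vehicle by undirected edges, and performs one symmetrically normalized propagation step. The feature values, the number of vehicles, and the untrained random weights are hypothetical placeholders, not data or parameters from the paper.

```python
import numpy as np

# Node features [x, y, v_x, v_y, a_x, a_y] for the ego vehicle (node 0) and two
# surrounding vehicles; the numbers are hypothetical and for illustration only.
X = np.array([
    [0.0,  0.2, 5.6, 0.0,  0.3, 0.0],   # ego vehicle (relative to its lane center line)
    [18.0, 0.1, 5.6, 0.0, -1.0, 0.0],   # front vehicle in the current lane (relative to ego)
    [12.0, 3.5, 8.3, 0.0,  0.0, 0.0],   # front vehicle in the target lane (relative to ego)
])
N = X.shape[0]

# Undirected edges: the ego vehicle (node 0) is connected to every other vehicle.
A = np.zeros((N, N))
A[0, 1:] = 1.0
A[1:, 0] = 1.0

# A_tilde = A + I_N and the inverse square root of its degree matrix D_tilde.
A_tilde = A + np.eye(N)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))

# One propagation step H^(1) = ReLU(D^-1/2 A_tilde D^-1/2 H^(0) W^(0)) with
# untrained random weights W^(0) mapping 6 input features to 16 hidden features.
rng = np.random.default_rng(0)
W0 = 0.1 * rng.standard_normal((X.shape[1], 16))
H1 = np.maximum(0.0, D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X @ W0)
print(H1.shape)   # (3, 16): one embedding per vehicle node
```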

3.2. Dataset Construction and Network Details

In this paper, the driving data used to train the decision module's network are collected on the CARLA [32] simulation platform. For each set of data collection, a dynamic driving environment under different working conditions was set up, and the surrounding vehicles were placed in automatic driving mode using "set_autopilot(True)" in CARLA's PythonAPI. The ego vehicle was controlled by a driver using the external driving simulator shown in Figure 3, during which the relevant driving data were recorded and saved.
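For reference, the snippet below sketches how surrounding vehicles can be spawned and switched to the built-in autopilot through CARLA's PythonAPI, mirroring the set_autopilot(True) call described above, and how their states can be read for logging. The server address, the number of spawned vehicles, and the blueprint filter are illustrative assumptions rather than the paper's actual collection script.

```python
import carla

client = carla.Client("localhost", 2000)   # assumed default CARLA server address
client.set_timeout(10.0)
world = client.get_world()

blueprints = list(world.get_blueprint_library().filter("vehicle.*"))
spawn_points = world.get_map().get_spawn_points()

# Spawn a few surrounding vehicles and hand them to the built-in autopilot,
# mirroring the set_autopilot(True) call described above.
others = []
for bp, sp in zip(blueprints[:5], spawn_points[:5]):
    actor = world.try_spawn_actor(bp, sp)
    if actor is not None:
        actor.set_autopilot(True)
        others.append(actor)

# Example of reading a surrounding vehicle's state for the dataset:
# world-frame position, velocity, and acceleration at the current tick.
states = [(v.get_transform().location, v.get_velocity(), v.get_acceleration())
          for v in others]
```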
Each set of data in the dataset corresponds to a driving task in a different dynamic environment over a certain period of time; in total, more than 300 sets of driving data were collected, covering a total driving mileage of more than 60 km. Each set of data contains the perception data of the ego vehicle and the other vehicles. The ego vehicle's data include the lateral speed v_y and longitudinal speed v_x, the lateral distance d_y relative to the center line of the lane before the lane change, and the action performed, D, i.e., the behavioral decision instruction. The other vehicles' data include the lateral distance y_r and longitudinal distance x_r relative to the ego vehicle, the lateral speed v_{yi} and longitudinal speed v_{xi}, and the lateral acceleration a_{yi} and longitudinal acceleration a_{xi}.
The GCN model was trained using the Adam optimizer [33]. The initial learning rate was set to 0.001, and an exponential decay strategy with a decay rate of 0.98 was applied to the learning rate. To avoid the over-smoothing phenomenon caused by too many graph convolutional layers, this paper uses two convolutional layers; the constructed adjacency matrix A and node feature matrix X are input into the GCN, and the activation function of each layer is the rectified linear unit ReLU(·). The specific construction process of the GCN is illustrated in Figure 4. The global graph embedding is then obtained by pooling the node embeddings of the whole graph, and finally the probability of each decision instruction is output by the normalized exponential function Softmax(·). The action decision command with the highest probability is taken as the output, and the loss function is the cross-entropy loss CrossEntropyLoss(·). The detailed GCN construction parameters are summarized in Table 1.
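A minimal sketch of the two-layer GCN classifier and training configuration described above, written with PyTorch Geometric, is given below. The hidden width of 64 and the three-way action space are assumptions made for illustration; only the optimizer, learning-rate schedule, activation, pooling, and loss function follow the description above and Table 1.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class LaneChangeGCN(nn.Module):
    def __init__(self, in_dim=6, hidden=64, n_actions=3):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)    # first graph convolution
        self.conv2 = GCNConv(hidden, hidden)    # second graph convolution (two layers to limit over-smoothing)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)          # pool node embeddings into a graph embedding
        return self.head(g)                     # logits; Softmax is applied inside the loss

model = LaneChangeGCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.98)  # exponential decay, rate 0.98
criterion = nn.CrossEntropyLoss()

# One (hypothetical) training step on a mini-batch `data` from a PyG DataLoader:
# logits = model(data.x, data.edge_index, data.batch)
# loss = criterion(logits, data.y)
# loss.backward(); optimizer.step(); optimizer.zero_grad(); scheduler.step()
```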

4. Construction of Drivable Area

According to the local subgraph obtained by the decision-making module, the lane-changing scenario as shown in Figure 5 is defined, where Lane_1 is the current lane of the ego vehicle, Lane_2 is the target lane of the lane change, the middle is the dividing line of the two lanes, and Bound_1 and Bound_2 are the lower and upper boundaries of the road, respectively. Car_auto is an automatic driving vehicle, Car_1 and Car_2 are the front and rear vehicles of the current lane, respectively, and Car_3 and Car_4 are the front and rear vehicles of the target lane, respectively.
In order to ensure the safety of the trajectory, the obstacle avoidance constraints of the ego vehicle and other vehicles should be considered in the trajectory planning module. Excessively complex obstacle avoidance constraints will affect the solving speed of the optimization algorithm. If the distance between the ego vehicle and the other vehicle is directly used as a constraint, the constraint is non-convex, which increases the complexity of the trajectory planning problem. If the distance between the ego vehicle and the other vehicles is taken as part of the objective function, it is impossible to guarantee that the distance between the two vehicles is greater than 0, and it is difficult to strictly guarantee that there will be no collision between the ego vehicle and other vehicles. Therefore, this paper first considers the lane boundary and the position, speed, and acceleration information of the surrounding vehicles, establishes a dynamic convex polygon drivable area without obstacles, limits the vehicle trajectory to the drivable area, and constructs a convex constraint, thus ensuring strict non-collision between the ego vehicle and other vehicles. The drivable area is shown in Figure 6.
The drivable areas can be divided into two categories. When the longitudinal position of the rear boundary of vehicle Car_1 is greater than that of the rear boundary of vehicle Car_3, a rectangular drivable area, as shown in Figure 6a, is established. When the longitudinal position of the rear boundary of vehicle Car_1 is smaller than that of the rear boundary of vehicle Car_3, a pentagonal drivable area, as shown in Figure 6b, is established. The rear boundary of the drivable area is determined by Car_2 and Car_4. Since all vehicles are moving forward, only the front boundary of whichever of the two rear vehicles is farther ahead needs to be taken as the rear boundary of the drivable area, which reduces the complexity of the constraints without affecting the lane-change planning.
In consideration of the vehicle’s inherent volume, directly assessing whether the center of the vehicle is within the aforementioned drivable area still poses a collision risk. Therefore, this paper advocates the contraction of the convex polygon, ensuring that the vehicle strictly adheres to obstacle avoidance constraints. Assuming that the vehicle’s center of mass position is the reference point, with a distance of ‘a’ to the front end and ‘b’ to the rear end from the vehicle’s center, and a vehicle width of 2d, the drivable area is shrunk based on these parameter details. The principle is illustrated in Figure 7.
The shrunken convex polygon area can be described by:
M(t) X(t) + n(t) \leq 0, \quad \forall t \in [0, T]
where X(t) = [x(t), y(t)]^T, x(t) and y(t) respectively represent the longitudinal and lateral position of the vehicle at time t, and T represents the total duration of the planned trajectory. The edges of the shrunken convex polygon are determined by M(t) and n(t).
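To make the half-plane constraint M(t)X(t) + n(t) ≤ 0 concrete, the sketch below encodes a rectangular drivable area as four half-planes and shrinks it by the vehicle dimensions a, b, and d described above. The boundary values are placeholders and the time dependence is dropped for brevity, so this is an illustrative check rather than the paper's implementation.

```python
import numpy as np

# Rectangular drivable area (time dependence omitted): x_min <= x <= x_max, y_min <= y <= y_max.
x_min, x_max = 5.0, 60.0     # hypothetical rear/front boundaries derived from surrounding vehicles
y_min, y_max = -1.75, 5.25   # hypothetical road boundaries (Bound_1, Bound_2)

# Vehicle geometry: distance a to the front end, b to the rear end, half-width d.
a, b, d = 2.0, 2.0, 0.9

# Shrink the area so that constraining the centroid keeps the whole vehicle body inside.
x_lo, x_hi = x_min + b, x_max - a
y_lo, y_hi = y_min + d, y_max - d

# Half-plane form M @ [x, y]^T + n <= 0 for the shrunken rectangle.
M = np.array([[ 1.0,  0.0],    #  x <= x_hi
              [-1.0,  0.0],    # -x <= -x_lo
              [ 0.0,  1.0],    #  y <= y_hi
              [ 0.0, -1.0]])   # -y <= -y_lo
n = np.array([-x_hi, x_lo, -y_hi, y_lo])

def inside(point):
    """True if the vehicle centroid `point` = (x, y) satisfies the shrunken constraints."""
    return bool(np.all(M @ np.asarray(point) + n <= 0.0))

print(inside((30.0, 1.0)), inside((30.0, 5.0)))  # True, False
```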

5. Trajectory Planning of Multi-Segment Polynomial Curve Based on Optimization

5.1. Construction of Polynomial Equation of Trajectory Curve

The complete trajectory is divided into k sub-trajectory segments. The number of segments k is not subject to a fixed constraint; it is determined in practice by the computing power of the platform and the real-time requirements on computation time.
The planning horizon of the complete trajectory is T and all sub-trajectories share the same planning horizon, so the planning horizon T_i of the i-th sub-trajectory can be expressed as:
T_i = \frac{T}{k} \quad (i = 1, 2, \ldots, k)
The longitudinal position of the i-th segment is described by a quintic polynomial:
x_i(t) = a_{i0} + a_{i1} t + a_{i2} t^2 + a_{i3} t^3 + a_{i4} t^4 + a_{i5} t^5
Hence, the longitudinal velocity \dot{x}_i(t), acceleration \ddot{x}_i(t), and rate of acceleration change (jerk) \dddot{x}_i(t) can be described as:
\dot{x}_i(t) = a_{i1} + 2 a_{i2} t + 3 a_{i3} t^2 + 4 a_{i4} t^3 + 5 a_{i5} t^4
\ddot{x}_i(t) = 2 a_{i2} + 6 a_{i3} t + 12 a_{i4} t^2 + 20 a_{i5} t^3
\dddot{x}_i(t) = 6 a_{i3} + 24 a_{i4} t + 60 a_{i5} t^2
Thus, the undetermined coefficients of the longitudinal position expression of the i-th sub-trajectory can be collected in a vector X_i^a:
X_i^a = [a_{i0}, a_{i1}, a_{i2}, a_{i3}, a_{i4}, a_{i5}]^T
Using the same method, the lateral position of the i-th segment can be obtained:
y_i(t) = b_{i0} + b_{i1} t + b_{i2} t^2 + b_{i3} t^3 + b_{i4} t^4 + b_{i5} t^5
Then, the lateral velocity \dot{y}_i(t), acceleration \ddot{y}_i(t), and rate of acceleration change (jerk) \dddot{y}_i(t) can be described as:
\dot{y}_i(t) = b_{i1} + 2 b_{i2} t + 3 b_{i3} t^2 + 4 b_{i4} t^3 + 5 b_{i5} t^4
\ddot{y}_i(t) = 2 b_{i2} + 6 b_{i3} t + 12 b_{i4} t^2 + 20 b_{i5} t^3
\dddot{y}_i(t) = 6 b_{i3} + 24 b_{i4} t + 60 b_{i5} t^2
Finally, the undetermined coefficients of the lateral position expression of the i-th sub-trajectory can be collected in a vector X_i^b:
X_i^b = [b_{i0}, b_{i1}, b_{i2}, b_{i3}, b_{i4}, b_{i5}]^T
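Because the quintic coefficients enter all later costs and constraints linearly, it is convenient to evaluate each derivative through a basis row vector, e.g. x_i(t) = \beta_0(t) X_i^a. The helper below is a generic sketch of such basis rows for position, velocity, acceleration, and jerk; the example coefficients are hypothetical.

```python
import numpy as np

def quintic_basis(t, order=0):
    """Row vector beta such that beta @ [a0, ..., a5] equals the order-th derivative
    of a0 + a1*t + ... + a5*t**5 at time t (order 0..3: position, velocity, acceleration, jerk)."""
    beta = np.zeros(6)
    for p in range(order, 6):
        c = np.prod(np.arange(p, p - order, -1)) if order > 0 else 1.0   # falling factorial p*(p-1)*...
        beta[p] = c * t ** (p - order)
    return beta

# Example: longitudinal state of one segment at t = 1.2 s for hypothetical coefficients X_i^a.
X_a = np.array([0.0, 5.6, 0.1, 0.05, -0.01, 0.001])   # [a_i0, ..., a_i5]
t = 1.2
pos, vel, acc, jerk = (quintic_basis(t, k) @ X_a for k in range(4))
print(pos, vel, acc, jerk)
```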

5.2. Parameter Optimization of Polynomial Trajectory Curve

To meet the strict real-time requirements of the trajectory planning algorithm, this paper formulates the trajectory planning problem as a quadratic programming (QP) optimization model, exploiting the high solving efficiency of QP to optimize the parameters of the lateral/longitudinal trajectory expressions. The standard form of the quadratic programming model is as follows:
\min J = X^T Q X + 2 q^T X
\text{s.t.} \quad G X \leq h, \quad A X = b
where X = [X_1^{aT}, X_2^{aT}, \ldots, X_k^{aT}, X_1^{bT}, X_2^{bT}, \ldots, X_k^{bT}]^T \in \mathbb{R}^{12k} is the optimization variable of the lateral and longitudinal trajectory optimization model.

5.2.1. Cost Function

In this paper, a multi-segment polynomial trajectory planning method is adopted. Different cost functions will be set for different subsegment trajectories, which can be optimized according to the specific parameters of different subsegment trajectories to obtain higher-quality trajectories.
For convenience of description, the cost function is divided into a lateral cost function and a longitudinal cost function. The cost function of the i-th sub-trajectory is expressed as follows:
J_{ix} = J_{ix\_T} + J_{ix\_C} \quad (i = 1, 2, \ldots, k)
J_{iy} = J_{iy\_T} + J_{iy\_C} \quad (i = 1, 2, \ldots, k)
where J_{ix} is the cost function of the longitudinal position of the i-th sub-trajectory, which includes the longitudinal moving target cost J_{ix\_T} and the longitudinal comfort cost J_{ix\_C}. Similarly, J_{iy} is the cost function of the lateral position of the i-th sub-trajectory, which includes the lateral moving target cost J_{iy\_T} and the lateral comfort cost J_{iy\_C}.
The moving target includes four parts: the longitudinal target position x_{target}, the longitudinal target speed v_{target}, the lateral target position y_{target}, and the lateral target speed 0. From this, the specific expression of the longitudinal/lateral moving target cost is obtained:
J_{ix\_T} = \sum_{j=0}^{N} \left( \omega_{ix\_pos} (x_i(t_j) - x_{target})^2 + \omega_{ix\_speed} (\dot{x}_i(t_j) - v_{target})^2 \right)
J_{iy\_T} = \sum_{j=0}^{N} \left( \omega_{iy\_pos} (y_i(t_j) - y_{target})^2 + \omega_{iy\_speed} \, \dot{y}_i^2(t_j) \right)
where N is the number of trajectory points of the sub-trajectory position curves, calculated as N = T_i / \Delta t, and t_j is the time corresponding to the j-th trajectory point, calculated as t_j = j \Delta t. The weight coefficients for the cost function of the longitudinal position curve of the i-th sub-trajectory are denoted \omega_{ix\_pos} and \omega_{ix\_speed}, and the weight coefficients for the cost function of the lateral position curve of the i-th sub-trajectory are denoted \omega_{iy\_pos} and \omega_{iy\_speed}.
For the comfort cost, the comfort of the trajectory is mainly related to the rate of acceleration change (jerk). Therefore, the sum of squares of the jerk of the longitudinal trajectory is taken as the longitudinal comfort cost J_{ix\_C}, and the sum of squares of the jerk of the lateral trajectory as the lateral comfort cost J_{iy\_C}. The weight coefficients of the comfort cost of the i-th sub-trajectory are denoted \omega_{ix\_jerk} and \omega_{iy\_jerk}. The specific expressions are as follows:
J_{ix\_C} = \sum_{j=0}^{N} \omega_{ix\_jerk} \, \dddot{x}_i^2(t_j)
J_{iy\_C} = \sum_{j=0}^{N} \omega_{iy\_jerk} \, \dddot{y}_i^2(t_j)
Using (20) and (21), the complete cost function of the multi-segment polynomial parameter optimization can be expressed as:
J = \sum_{i=1}^{k} (J_{ix} + J_{iy}) = \sum_{i=1}^{k} (J_{ix\_T} + J_{ix\_C} + J_{iy\_T} + J_{iy\_C})
Using (23)–(26), the specific expression of the cost function is as follows:
J = \sum_{i=1}^{k} \left[ \sum_{j=0}^{N} \left( \omega_{ix\_pos} (x_i(t_j) - x_{target})^2 + \omega_{ix\_speed} (\dot{x}_i(t_j) - v_{target})^2 \right) + \sum_{j=0}^{N} \omega_{ix\_jerk} \, \dddot{x}_i^2(t_j) + \sum_{j=0}^{N} \left( \omega_{iy\_pos} (y_i(t_j) - y_{target})^2 + \omega_{iy\_speed} \, \dot{y}_i^2(t_j) \right) + \sum_{j=0}^{N} \omega_{iy\_jerk} \, \dddot{y}_i^2(t_j) \right]
The optimization priorities of the different motion parameters can be modified by adjusting the magnitudes of \omega_{ix\_pos}, \omega_{ix\_speed}, \omega_{iy\_pos}, \omega_{iy\_speed}, \omega_{ix\_jerk}, and \omega_{iy\_jerk}, so as to emphasize the optimization of specific parameters in different sub-trajectories.
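As a sketch of how the sampled, weighted costs above become the quadratic form X^T Q X + 2 q^T X, the snippet below accumulates the Hessian and linear blocks for the longitudinal part of a single segment. The weights, horizon, and targets are placeholder values, and the basis helper from the Section 5.1 sketch is repeated so the block runs on its own.

```python
import numpy as np

def quintic_basis(t, order=0):
    """Same helper as in the Section 5.1 sketch: row vector mapping [a0, ..., a5]
    to the order-th derivative of the quintic polynomial at time t."""
    beta = np.zeros(6)
    for p in range(order, 6):
        c = np.prod(np.arange(p, p - order, -1)) if order > 0 else 1.0
        beta[p] = c * t ** (p - order)
    return beta

def longitudinal_cost_blocks(T_i, dt, x_target, v_target, w_pos=1.0, w_speed=1.0, w_jerk=0.1):
    """Hessian block Q (6x6) and linear block q (6,) for one segment's longitudinal
    coefficients, so that the sampled cost equals X_a^T Q X_a + 2 q^T X_a + const."""
    Q, q = np.zeros((6, 6)), np.zeros(6)
    N = int(T_i / dt)
    for j in range(N + 1):
        t = j * dt
        b_pos, b_vel, b_jerk = quintic_basis(t, 0), quintic_basis(t, 1), quintic_basis(t, 3)
        Q += (w_pos * np.outer(b_pos, b_pos) + w_speed * np.outer(b_vel, b_vel)
              + w_jerk * np.outer(b_jerk, b_jerk))
        q += -w_pos * x_target * b_pos - w_speed * v_target * b_vel
    return Q, q

# Hypothetical segment: 2.5 s horizon, 0.1 s sampling, target 40 m ahead at 30 km/h (8.3 m/s).
Q_blk, q_blk = longitudinal_cost_blocks(T_i=2.5, dt=0.1, x_target=40.0, v_target=8.3)
```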

5.2.2. Constraints

(1) Continuity constraint
To ensure the continuity of the autonomous vehicle's motion, the lateral/longitudinal motion parameters at the initial point of the output trajectory must be consistent with the motion state of the vehicle at its current position; that is, the four motion parameters of the initial point, namely position, speed, acceleration, and rate of acceleration change, must be continuous, which is expressed as follows:
x_1(0) = x_0, \quad \dot{x}_1(0) = \dot{x}_0, \quad \ddot{x}_1(0) = \ddot{x}_0, \quad \dddot{x}_1(0) = \dddot{x}_0
y_1(0) = y_0, \quad \dot{y}_1(0) = \dot{y}_0, \quad \ddot{y}_1(0) = \ddot{y}_0, \quad \dddot{y}_1(0) = \dddot{y}_0
where x_1(0), \dot{x}_1(0), \ddot{x}_1(0), and \dddot{x}_1(0) are the initial longitudinal position, velocity, acceleration, and rate of acceleration change of the first sub-trajectory, and x_0, \dot{x}_0, \ddot{x}_0, and \dddot{x}_0 are the current longitudinal position, velocity, acceleration, and rate of acceleration change of the vehicle. Using the same definitions, the continuity constraint for the initial lateral point is obtained, as shown in (29).
To achieve a smooth transition between the k segments in the optimization-based multi-segment polynomial curve trajectory planning method, the motion parameters must be continuous between sub-segments; that is, the motion parameters at the end point of the previous segment and at the initial point of the next segment must be equal. We mainly consider the continuity of four motion parameters at each connection point: position, speed, acceleration, and rate of acceleration change, which is expressed as:
x_i(T_i) - x_{i+1}(0) = 0, \quad \dot{x}_i(T_i) - \dot{x}_{i+1}(0) = 0, \quad \ddot{x}_i(T_i) - \ddot{x}_{i+1}(0) = 0, \quad \dddot{x}_i(T_i) - \dddot{x}_{i+1}(0) = 0
y_i(T_i) - y_{i+1}(0) = 0, \quad \dot{y}_i(T_i) - \dot{y}_{i+1}(0) = 0, \quad \ddot{y}_i(T_i) - \ddot{y}_{i+1}(0) = 0, \quad \dddot{y}_i(T_i) - \dddot{y}_{i+1}(0) = 0
\quad (i = 1, 2, \ldots, k-1)
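The junction conditions above are linear in the polynomial coefficients, so they can be written directly as rows of the equality constraint AX = b (with b = 0). The sketch below assembles these rows for the longitudinal coefficients under an assumed column ordering of the decision vector; it is an illustration consistent with the formulation above, not code from the paper.

```python
import numpy as np

def quintic_basis(t, order=0):
    """Row vector mapping quintic coefficients [c0, ..., c5] to the order-th derivative
    at time t (repeated from the Section 5.1 sketch for self-containedness)."""
    beta = np.zeros(6)
    for p in range(order, 6):
        c = np.prod(np.arange(p, p - order, -1)) if order > 0 else 1.0
        beta[p] = c * t ** (p - order)
    return beta

def continuity_rows(k, T_seg, n_vars):
    """Equality-constraint rows A_eq (with b_eq = 0) enforcing position/velocity/
    acceleration/jerk continuity of the longitudinal coefficients between consecutive
    segments; segment i occupies columns 6*(i-1) .. 6*i - 1 of the decision vector."""
    rows = []
    for i in range(1, k):            # junction between segment i and segment i+1
        for order in range(4):       # position, velocity, acceleration, jerk
            row = np.zeros(n_vars)
            row[6 * (i - 1):6 * i] = quintic_basis(T_seg, order)    # end of segment i
            row[6 * i:6 * (i + 1)] = -quintic_basis(0.0, order)     # start of segment i+1
            rows.append(row)
    return np.vstack(rows)

A_eq = continuity_rows(k=3, T_seg=2.5, n_vars=18)   # longitudinal part of a 3-segment trajectory
print(A_eq.shape)                                   # (8, 18): 4 conditions at each of 2 junctions
```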
(2) Safety constraint
In the process of trajectory planning, it is necessary to ensure that the trajectory neither collides with obstacle vehicles nor exceeds the road boundaries. In Section 4, safety constraints were constructed for the parameter optimization of the multi-segment polynomials, constraining the position of the vehicle's centroid to the shrunken drivable area. A collision-free trajectory can thus be generated by taking the centroid of the ego vehicle as the planning start point, which reduces the influence of complex constraints on the optimization problem. The specific constraints are expressed as follows:
M_i(t) X_i(t) + n_i(t) \leq 0, \quad \forall t \in [0, T_i]
where X_i(t) = [x_i(t), y_i(t)]^T, and x_i(t) and y_i(t) represent the longitudinal and lateral position of the vehicle at time t of the i-th sub-trajectory, respectively. T_i represents the total time range of the sub-trajectory. The edges of the shrunken drivable area within the time range of the i-th sub-trajectory are determined by M_i(t) and n_i(t).
The lane-change motion planning problem can thus be established as a standard quadratic programming problem of the form given in Equation (20); the specific model is shown in Equation (32). By solving this quadratic programming problem, the optimal trajectory is obtained.
\min \ (28) \quad \text{s.t.} \ (29), (30), (31)
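Once the cost blocks, the continuity rows, and the drivable-area half-planes have been assembled, the QP can be passed to a solver; Section 6 states that OSQP [34] is used. The sketch below shows a generic call pattern with placeholder matrices, converting the model above into OSQP's form min ½ x^T P x + q^T x s.t. l ≤ Ax ≤ u.

```python
import numpy as np
import osqp
from scipy import sparse

def solve_lane_change_qp(Q, q, A_eq, b_eq, A_ineq, upper_ineq):
    """Solve min X^T Q X + 2 q^T X  s.t.  A_eq X = b_eq,  A_ineq X <= upper_ineq with OSQP."""
    # OSQP minimizes 0.5 x^T P x + q_osqp^T x, so set P = 2Q and q_osqp = 2q.
    P = sparse.csc_matrix(2.0 * Q)
    q_osqp = 2.0 * q
    A = sparse.vstack([sparse.csc_matrix(A_eq), sparse.csc_matrix(A_ineq)], format="csc")
    l = np.concatenate([b_eq, -np.inf * np.ones(A_ineq.shape[0])])   # equalities: l = u = b_eq
    u = np.concatenate([b_eq, upper_ineq])
    prob = osqp.OSQP()
    prob.setup(P=P, q=q_osqp, A=A, l=l, u=u, verbose=False)
    res = prob.solve()
    return res.x   # stacked polynomial coefficients [X_1^a, ..., X_k^a, X_1^b, ..., X_k^b]

# Tiny placeholder problem just to exercise the call pattern (not the paper's actual matrices):
n = 6
coeffs = solve_lane_change_qp(Q=np.eye(n), q=np.zeros(n),
                              A_eq=np.eye(1, n), b_eq=np.array([1.0]),
                              A_ineq=np.eye(n), upper_ineq=10.0 * np.ones(n))
```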

6. Simulation and On-Road Vehicle Experiments

6.1. Simulation Experiment

In this section, the proposed method is verified by simulation experiments. The experiments were carried out on the CARLA simulation platform under Ubuntu. CARLA is an open-source simulator based on Unreal Engine 4 that provides realistic simulation scenes and allows various sensors to be configured.
In the simulation experiment, several other vehicles are first placed in the scene, and each is set to automatic driving mode using the relevant API of the simulation platform. The speed and position information of the other vehicles is obtained through the simulation platform and fed to the decision-making and planning algorithms. The driving behavior of the ego vehicle in different driving scenarios is recorded to evaluate the method proposed in this paper. OSQP [34] was used to solve the optimization model constructed in Section 5 to obtain the optimal trajectory. Finally, the relevant figures are generated from the simulated vehicle motion data and the corresponding computation time information. The overall framework of the simulation experiment is illustrated in Figure 8.
As shown in Figure 9, the ego vehicle is driving at a speed of 20 km/h in the current lane, and the speed of the front and rear vehicles in the current lane is 20 km/h. Moreover, the front vehicle in the current lane is decelerating uniformly with an acceleration of −1 m/s². The speed of the front and rear vehicles in the target lane is 30 km/h. The ego vehicle needs to change lanes from the current lane to the target lane and travel at a target speed of 30 km/h.
The proposed method is compared with a polynomial sampling method in the simulation experiments. Since too many sampled trajectories lead to excessively long computation times and make real-time performance difficult to guarantee, the polynomial-sampling-based method in this paper uses one time sampling layer and one lateral offset sampling layer, six target velocity sampling layers, a control horizon of fifty steps, and a sampling time of 0.1 s; that is, the trajectory for the next 5.0 s is planned. Both approaches are implemented in Python 3.11. The simulation results are shown in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15.
Since sampling-based methods can only constrain the terminal states of the samples, the motion parameters at times other than the final moment of the trajectory are determined solely by polynomial fitting. This makes it difficult to control the motion parameters during the lane-changing process, hindering the full utilization of the vehicle's performance and the satisfaction of other requirements of the driving process. As shown in Figure 10, compared with the polynomial-sampling-based method, the method proposed in this paper completes the lane change faster, in about 5.5 s versus about 7 s. Furthermore, the proposed method reaches the target speed about 3 s faster than the sampling-based method (as shown in Figure 11). Figure 12 shows how the lateral speed of the two methods varies over time. The proposed method reaches its maximum lateral speed in about 2 s, and its maximum lateral speed is more than 1 km/h higher than that of the sampling-based method. As can be seen from Figure 13 and Figure 14, the proposed method exploits the acceleration and deceleration capability of the vehicle to a greater extent and reaches the target state faster while keeping the vehicle's acceleration continuous.
In addition, in terms of computation time, every trajectory sampled by the sampling-based method requires collision detection against obstacles, and the more sampled paths there are, the lower the planning efficiency, which greatly increases the computation time. As shown in Figure 15, the computation time of the proposed method is substantially lower than that of the sampling-based method: the average time per cycle of the polynomial-sampling-based method is about 0.058 s, while that of the proposed method is less than 0.02 s.
The specific changes in the major performance parameters are shown in Table 2. Compared with the sampling-based algorithm, the lane-changing duration was reduced by 23%, while the times to reach the longitudinal target speed and the maximum lateral speed decreased by 41% and 19%, respectively. The proposed method therefore has a significant advantage in lane-changing efficiency. Additionally, the ranges of longitudinal and lateral acceleration variation increased by 95% and 63%, respectively, indicating that the proposed method makes better use of the vehicle's acceleration and deceleration capability within the allowed range. Lastly, the average time cost decreased by 76%; a lower time cost means a faster response to environmental changes. In conclusion, the optimization-based approach proposed in this paper is demonstrated to improve performance in various respects compared with the sampling-based method.

6.2. On-Road Vehicle Experiments

In addition to the simulation experiments, the performance of the method is further verified by on-road vehicle experiments. As shown in Figure 16, in the on-road experiment scene the ego vehicle is driving in the current lane at a speed of 20 km/h, the speed of the front and rear vehicles in the current lane is 10 km/h, and the speed of the vehicles in the target lane is 25 km/h. The ego vehicle needs to change lanes from the current lane to the target lane and travel at a target speed of 25 km/h.
The on-road vehicle experimental platform is a Wuling Hongguang Mini EV (SGMW, Liuzhou, China) converted into a wire-controlled platform. It is equipped with a Yanhua MIC-770-V3 (Advantech, Hong Kong, China) industrial computer running the Ubuntu operating system, featuring an Intel i9-12900TE CPU (Intel, Santa Clara, CA, USA) and an RTX 4070 Ti graphics card (NVIDIA, Santa Clara, CA, USA). Additionally, the platform is equipped with multiple sensors for environmental perception. The wire-controlled chassis facilitates the validation of various algorithms. A photograph of the actual vehicle platform is shown in Figure 17.
In the on-road vehicle experiment, a vehicle driven by a human driver serves as the other car, and the driver executes different driving behaviors to verify the performance of the algorithm. The autonomous vehicle obtains the speed and position information of the other car through its on-board sensors for decision making and planning. During the experiment, the motion data of the vehicle were recorded and saved by the GW-NAV100B MEMS inertial/satellite integrated navigation system (Beijing Guo Wei Xing Tong Technology Co., Ltd., Beijing, China) for subsequent analysis. The parameters of the integrated navigation system are shown in Table 3. The experimental data are shown in Figure 18, Figure 19, Figure 20, Figure 21 and Figure 22.
Figure 18 shows the lane change trajectory of the vehicle in the global coordinate system. It can be seen that the lane change trajectory of the vehicle in the experiment is gentle and can be accurately converted to the designated lane. For the longitudinal acceleration process of the vehicle, it takes about 4.5 s for the vehicle to accelerate to the longitudinal target speed, and the target speed can be steadily followed (as shown in Figure 19). As can be seen from Figure 20, in the on-road vehicle experiment, it only takes about 2 s for the lateral speed to increase from 0 to the maximum value, and it decreases to zero at about 6.5 s. In other words, the lane change target is completed at about 6.5 s. As shown in Figure 21 and Figure 22, both lateral and longitudinal acceleration of the vehicle ensure smooth transition and are within the required range to ensure driving comfort.

7. Conclusions

In response to the autonomous lane-changing problem in complex dynamic interactive driving environments, this paper proposes an autonomous vehicle lane-changing decision-making and trajectory planning method based on graph convolutional networks (GCNs) and multi-segment polynomial curve optimization. The method constructs a GCN model for the graph-structured data of the driving environment, establishing a decision-making approach that comprehensively considers the interaction between autonomous vehicles and driving environment information. Simultaneously, the paper creates a dynamic convex polygon collision-free drivable area based on the position, velocity, and acceleration information of the ego vehicle and surrounding vehicles in the driving environment. Finally, an optimization-based multi-segment polynomial curve trajectory planning method is introduced, achieving targeted optimization of multiple trajectory segments within the collision-free drivable area.
This approach better reflects the impact of other vehicles on the lane-changing decisions of the autonomous vehicle, enhances trajectory planning efficiency, and optimizes the motion parameters of the polynomial trajectory curve of each segment according to its motion goal, ensuring safe and smooth motion trajectories and accurate realization of the motion goal. It addresses the limitations of completeness and generalization of traditional decision algorithms in complex dynamic environments. Verification is conducted through both simulation and on-road vehicle experiments, demonstrating the feasibility of the proposed method and its superior performance compared to existing decision-making and planning methods.
In future work, this method could be extended to other scenarios such as ramp merging and intersections, and through more generalized modeling, be applied to a broader range of scenarios.

Author Contributions

Conceptualization, F.F.; methodology, F.F. and C.W.; software, F.F.; validation, C.W.; formal analysis, F.F. and C.W.; Visualization, Y.L. and Y.H.; Resources, B.Z. and Y.H.; writing—original draft preparation, F.F.; writing—review and editing, C.W., B.Z., Y.L. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China: 52002026.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A Survey of Autonomous Driving: Common Practices and Emerging Technologies. IEEE Access 2020, 8, 58443–58469. [Google Scholar] [CrossRef]
  2. Ji, J.; Huang, Y.J.; Li, Y.W.; Wu, F. Decision-making analysis of vehicle autonomous driving behaviors for autonomous vehicles based on finite state machine. Automot. Technol. 2018, 12, 1–7. [Google Scholar] [CrossRef]
  3. Xia, X.T.; Pei, X.F.; Yu, J. Behavior planning and control of intelligence commercial vehicle based on finite state machine. In Proceedings of the Annual Meeting of the Chinese Society of Automotive Engineering, Shanghai, China, 19–21 October 2021; pp. 76–82. [Google Scholar] [CrossRef]
  4. Yu, H.; Tseng, H.E.; Langari, R. A human-like game theory-based controller for automatic lane changing. Transp. Res. Part C Emerg. Technol. 2018, 88, 140–158. [Google Scholar] [CrossRef]
  5. Yoo, J.H.; Langari, R. Stackelberg Game Based Model of Highway Driving. In Proceedings of the Asme Dynamic Systems & Control Conference Joint with the Jsme Motion & Vibration Conference, Fort Lauderdale, FL, USA, 17–19 October 2012. [Google Scholar] [CrossRef]
  6. Wei, C.; He, Y.; Tian, H.; Lv, Y. Game Theoretic Merging Behavior Control for Autonomous Vehicle at Highway On-Ramp. IEEE Trans. Intell. Transp. Syst. 2022, 23, 21127–21136. [Google Scholar] [CrossRef]
  7. Zhang, J.; Liao, Y.; Wang, S.; Han, J. Study on Driving Decision-Making Mechanism of Autonomous Vehicle Based on an Optimized Support Vector Machine Regression. Appl. Sci. 2017, 8, 13. [Google Scholar] [CrossRef]
  8. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  9. Peters, M.; Ketter, W.; Saar-Tsechansky, M.; Collins, J. A reinforcement learning approach to autonomous decision-making in smart electricity markets. Mach. Learn. 2013, 92, 5–39. [Google Scholar] [CrossRef]
  10. Costilla-Reyes, O.; Scully, P.; Ozanyan, K.B. Deep Neural Networks for Learning Spatio-Temporal Features from Tomography Sensors. IEEE Trans. Ind. Electron. 2017, 65, 645–653. [Google Scholar] [CrossRef]
  11. Li, L.; Ota, K.; Dong, M. Humanlike Driving: Empirical Decision-Making System for Autonomous Vehicles. IEEE Trans. Veh. Technol. 2018, 67, 6814–6823. [Google Scholar] [CrossRef]
  12. Liu, Y.; Wang, X.; Li, L.; Cheng, S.; Chen, Z. A Novel Lane Change Decision-Making Model of Autonomous Vehicle Based on Support Vector Machine. IEEE Access 2019, 7, 26543–26550. [Google Scholar] [CrossRef]
  13. Xu, X.; Zuo, L.; Li, X.; Qian, L.; Ren, J.; Sun, Z. A Reinforcement Learning Approach to Autonomous Decision Making of Intelligent Vehicles on Highways. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 3887–3897. [Google Scholar] [CrossRef]
  14. Dong, Q.; Jiang, T.; Xu, T.; Liu, Y. Graph-based Planning-informed Trajectory Prediction for Autonomous Driving. In Proceedings of the 2022 6th CAA International Conference on Vehicular Control and Intelligence (CVCI), Nanjing, China, 28–30 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
  15. Gao, X.; Luan, T.; Li, X.; Liu, Q.; Li, Z.; Yang, F. Multi-Vehicles Decision-Making in Interactive Highway Exit: A Graph Reinforcement Learning Approach. In Proceedings of the 2022 IEEE 17th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 16–19 December 2022; pp. 534–539. [Google Scholar] [CrossRef]
  16. Zhang, Y.; Li, J.; Chen, J.; Liu, Z.; Chen, Z. A GCN-Based Decision-Making Network for Autonomous UAV Landing. In Proceedings of the 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM), Shenzhen, China, 18–21 December 2020; pp. 652–657. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Li, J.; Chen, J.; Liu, Z.; Chen, Z. A Practical Approach to Robotic Design for the DARPA Urban Challenge; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar] [CrossRef]
  18. Luo, M.; Hou, X.; Yang, J. Surface Optimal Path Planning Using an Extended Dijkstra Algorithm. IEEE Access 2020, 8, 147827–147838. [Google Scholar] [CrossRef]
  19. Shang, E.; Dai, B.; Nie, Y.; Zhu, Q.; Xiao, L.; Zhao, D. A Guide-line and Key-point based A-star Path Planning Algorithm for Autonomous Land Vehicles. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020. [Google Scholar] [CrossRef]
  20. Erke, S.; Bin, D.; Yiming, N.; Qi, Z.; Liang, X.; Dawei, Z. An improved A-Star based path planning algorithm for autonomous land vehicles. Int. J. Adv. Robot. Syst. 2020, 17, C591–C617. [Google Scholar] [CrossRef]
  21. Yuan, C.R.; Liu, G.F.; Zhang, W.Q.; Pan, X.L. An efficient RRT cache method in dynamic environments for path planning. Robot. Auton. Syst. 2020, 131, 103595. [Google Scholar] [CrossRef]
  22. Cao, K.; Cheng, Q.; Gao, S.; Chen, Y.; Chen, C. Improved PRM for Path Planning in Narrow Passages. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019. [Google Scholar] [CrossRef]
  23. Arab, A.; Yu, K.; Yi, J.; Song, D. Motion planning for aggressive autonomous vehicle maneuvers. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016. [Google Scholar] [CrossRef]
  24. Noreen, I.; Khan, A.; Ryu, H.; Doh, N.L.; Habib, Z. Optimal path planning in cluttered environment using RRT*-AB. Intell. Serv. Robot. 2018, 11, 41–52. [Google Scholar] [CrossRef]
  25. Miller, C.; Pek, C.; Althoff, M. Efficient Mixed-Integer Programming for Longitudinal and Lateral Motion Planning of Autonomous Vehicles. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018. [Google Scholar] [CrossRef]
  26. Gutjahr, B.; Groll, L.; Werling, M. Lateral Vehicle Trajectory Optimization Using Constrained Linear Time-Varying MPC. IEEE Trans. Intell. Transp. Syst. 2016, 18, 1586–1595. [Google Scholar] [CrossRef]
  27. Zips, P.; Bock, M.; Kugi, A. A fast motion planning algorithm for car parking based on static optimization. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013. [Google Scholar] [CrossRef]
  28. Zips, P.; Bock, M.; Kugi, A. Optimization based path planning for car parking in narrow environments. Robot. Auton. Syst. 2016, 79, 1–11. [Google Scholar] [CrossRef]
  29. Yan, Y.; Peng, L.; Wang, J.; Zhang, H.; Shen, T.; Yin, G. A Hierarchical Motion Planning System for Driving in Changing Environments: Framework, Algorithms, and Verifications. IEEE/ASME Trans. Mechatron. 2023, 28, 1303–1314. [Google Scholar] [CrossRef]
  30. Lim, W.; Lee, S.; Sunwoo, M.; Jo, K. Hierarchical Trajectory Planning of an Autonomous Car Based on the Integration of a Sampling and an Optimization Method. IEEE Trans. Intell. Transp. Syst. 2018, 19, 613–626. [Google Scholar] [CrossRef]
  31. Bojarski, M.; Del Testa, D.; Dworakowski, D.; Firner, B.; Flepp, B.; Goyal, P.; Jackel, L.D.; Monfort, M.; Muller, U.; Zhang, J. End to End Learning for Self-Driving Cars. arXiv 2016, arXiv:1604.07316. [Google Scholar] [CrossRef]
  32. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. arXiv 2017, arXiv:1711.03938. [Google Scholar] [CrossRef]
  33. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
  34. Stellato, B.; Banjac, G.; Goulart, P.; Bemporad, A.; Boyd, S. OSQP: An Operator Splitting Solver for Quadratic Programs. Math. Program. Comput. 2020, 12, 637–672. [Google Scholar] [CrossRef]
Figure 1. Region of interest and intervehicle connectivity strategy.
Figure 2. Local subgraph and intervehicle connectivity strategy.
Figure 3. Simulation experiment driving simulator.
Figure 4. Illustration of GCN structure.
Figure 5. Illustration of the lane-changing scenario.
Figure 6. Illustration of the drivable area. (a) Rectangular drivable area. (b) Pentagonal drivable area.
Figure 7. Illustration of the shrunken drivable area.
Figure 8. Overall framework of the simulation experiment.
Figure 9. Illustration of simulation experiment lane-changing scene.
Figure 10. Comparison of lane-changing trajectories.
Figure 11. Comparison of longitudinal velocities.
Figure 12. Comparison of lateral velocities.
Figure 13. Comparison of longitudinal accelerations.
Figure 14. Comparison of lateral accelerations.
Figure 15. Comparison of time costs.
Figure 16. Illustration of on-road vehicle experiment scene.
Figure 17. Mini EV wire-controlled platform.
Figure 18. Trajectory of on-road vehicle experiment.
Figure 19. Longitudinal velocity of experiment.
Figure 20. Lateral velocity of experiment.
Figure 21. Longitudinal acceleration of experiment.
Figure 22. Lateral acceleration of experiment.
Table 1. GCN construction parameters.
Optimizer | Adam
Initial learning rate | 0.001
Attenuation rate | 0.98
Activation function | ReLU(·)
Output function | Softmax(·)
Loss function | CrossEntropyLoss(·)

Table 2. Variations in major performance parameters.
Parameter Name | Sampling-Based | Optimization-Based | Variation
Duration of lane change (Total) | 7.1 s | 5.5 s | ↓23%
Duration to Achieve Longitudinal Target Velocity | 6.9 s | 4.1 s | ↓41%
Duration to Achieve Lateral Target Velocity | 2.7 s | 2.2 s | ↓19%
Range of Longitudinal Acceleration Variation | 0 to 0.5756 m/s² | 0 to 1.1208 m/s² | ↑95%
Range of Lateral Acceleration Variation | −0.3062 to 0.6181 m/s² | −0.4485 to 0.9565 m/s² | ↑63%
Average Time Cost | 0.058 s | 0.014 s | ↓76%

Table 3. Parameters of the GW-NAV100B MEMS inertial/satellite integrated navigation system.
RTK accuracy (RMS) | Horizontal: 1.5 cm + 1 ppm; Vertical: 1.5 cm + 1 ppm
Orientation accuracy (RMS) | 0.2°/m
Velocity accuracy (RMS) | 0.03 m/s
Accelerometer zero bias stability | <0.2 mg
Accelerometer zero bias repeatability | <0.2 mg
Data update frequency | 20 Hz