1. Introduction
In scenarios such as intelligent manufacturing factories and logistics and transportation systems, automated guided vehicles (AGVs) are widely used as the main tools for intelligent material transportation [1]. AGV path planning refers to the ability of an AGV to autonomously generate a safe, feasible path based on evaluation criteria such as travel time, path length, and turning frequency, combined with its own sensors' perception of the environment [2,3].
In path planning, the traditional genetic algorithm typically searches for the most efficient route from a starting point to a destination. Its fundamental concept is to search for the optimal path by simulating natural genetic evolution through population evolution and fitness evaluation. The general steps are as follows. First, the path planning problem is transformed into an optimization problem that a genetic algorithm can handle: the objective function, the constraints, and the solution space are defined. Second, the population is initialized; a set of initial paths, or chromosomes, is randomly generated according to the characteristics of the problem, and the fitness value of each path is calculated to measure its quality, typically based on factors such as path length and obstacle avoidance ability. Selection then uses each path's fitness value to determine the probability that it becomes a parent of the next generation. Crossover generates new paths by exchanging chromosome segments, increasing population diversity and exploring the search space. Mutation alters chromosomes with a certain probability, introducing new path information, increasing population diversity, and preventing the population from falling into local optima. The selection, crossover, and mutation operations are repeated until the termination conditions are met. The traditional genetic algorithm copes well with complex path-planning problems and has good global search capability and robustness.
However, it also has drawbacks, such as low search efficiency and a tendency to fall into local optima. Therefore, when faced with different path-planning problems, it is necessary to select an appropriate optimization algorithm or to enhance the genetic algorithm by combining it with other heuristic algorithms, depending on the specific situation.
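The loop described above can be sketched in a few lines. The following Python skeleton is purely illustrative: the function names, operators, and parameter values are placeholders, not the implementation proposed later in this paper.

```python
import random

def ga_path_search(init_population, fitness, crossover, mutate,
                   generations=50, pc=0.8, pm=0.05):
    """Skeleton of the classical GA loop: selection, crossover, mutation."""
    population = init_population()
    for _ in range(generations):
        # Roulette-wheel selection: lower cost -> larger selection weight
        weights = [1.0 / (1.0 + fitness(ind)) for ind in population]
        parents = random.choices(population, weights=weights, k=len(population))
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < pc:
                a, b = crossover(a, b)      # exchange chromosome segments
            nxt.append(mutate(a) if random.random() < pm else a)
            nxt.append(mutate(b) if random.random() < pm else b)
        population = nxt
    return min(population, key=fitness)     # best (lowest-cost) individual
```

In practice, the `init_population`, `crossover`, and `mutate` callbacks encode the problem-specific path representation; the sections below describe how this paper improves each of them.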
To date, many researchers worldwide have studied this area extensively. Tian et al. [4] proposed an improved adaptive genetic algorithm that uses prior knowledge to ensure that population individuals remain feasible after genetic operations, addressing the infeasible paths generated by standard genetic algorithms. Li et al. [5], recognizing that the performance of the basic genetic algorithm depends on factors such as initial population quality, introduced an artificial bee colony algorithm to initialize the population and improve its diversity, and then designed an adaptive adjustment strategy for the crossover and mutation operators based on trigonometric functions to improve convergence speed. Qiao et al. [6] corrected non-smooth paths generated by genetic algorithm planning using an enhanced artificial potential field method, adding nodes to produce smoother trajectories. Liu et al. [7] addressed the difficulty of traditional mutation operators in producing high-quality solutions and the problem of premature convergence by presenting a path fine-tuning algorithm and incorporating simulated annealing in each generation for local optimization. Xu et al. [8] combined immune operators to improve solution quality. Akopov et al. [9] introduced a parallel genetic algorithm for simulation-based optimization of large-scale problems characterized by wide feasible ranges and multiple local extrema, and applied it to minimize the potential number of traffic accidents in an artificial multi-connected road network (AMCRN). Zhou et al. [10] proposed a genetic-algorithm-based framework of entry events and car-following events to address the lack of scenarios and low efficiency in virtual testing of connected automated vehicles (CAVs), significantly and efficiently improving the generation of accident and challenging scenarios. Chaymaa et al. [11] proposed a mobile robot path planning method based on an improved crossover genetic algorithm, improved in terms of distance and safety, to find a feasible and efficient path between two points. Chen et al. [12] fused the fireworks algorithm into a genetic algorithm to compensate for its premature convergence. Li et al. [13] used the A-star algorithm as heuristic information in population initialization and added deletion sub-operators, among other improvements, which reduced the number of iterations, although computation time remained long for complex paths.
This study addresses the limitations of the traditional genetic algorithm (GA) in path planning. The proposed improved genetic algorithm (IGA) incorporates the ant colony optimization (ACO) algorithm to overcome poor initial path quality, premature convergence, excessive turns and turning angles, susceptibility to deadlock, local extremes, and redundant path nodes. First, a new population initialization method improves the quality of randomly generated points by placing them near the line connecting the start and end points, yielding higher-quality random points and shorter generated paths; an improved ACO algorithm then connects these points efficiently, producing a higher-quality population under this initialization strategy. In the selection phase, tournament selection is combined with roulette wheel selection to prevent premature convergence, and an elite retention strategy prevents degradation of the algorithm. In the crossover phase, the concept of edit distance is introduced, and a two-layer crossover operation based on edit distance prevents invalid crossovers and accelerates evolution. In the mutation stage, a new method incorporating the concept of simulated annealing is proposed, and a three-stage mutation method enhances population diversity and accelerates convergence to the optimal solution. A deletion operator removes redundant path nodes and further speeds up convergence. The fitness function introduces a path smoothness constraint to guide the algorithm towards smoother paths. Finally, to balance population diversity against the algorithm's ability to settle on optimal individuals in later stages, adaptive crossover and mutation probability adjustment strategies are used.
This ensures greater population diversity in the early stages and faster determination of optimal individuals in the later stages. The simulation results indicate that the improved algorithm is more effective than the traditional genetic algorithm: it avoids local extremes more reliably, converges faster, and generates shorter, higher-quality paths. Practical experiments confirmed the advantages of the improved algorithm in convergence speed, path smoothness, and path length.
3. Genetic Algorithm Improvement and Implementation
3.1. Encoding Method
This article uses grid serial numbers to encode chromosomes; that is, each AGV path is composed of the serial numbers of the grids it passes through while moving from S to G. The red path in Figure 1 can be expressed as {S, 10, 21, 31, 42, 43, 44, 55, 66, 77, 78, 89, G}.
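The mapping between a grid serial number and its cell is straightforward. The sketch below assumes a row-major numbering on a 10-column grid; the exact dimensions of the grid in Figure 1 are an assumption here, not stated in the text.

```python
def index_to_cell(idx, cols=10):
    """Grid serial number -> (row, col), assuming row-major numbering."""
    return divmod(idx, cols)

def cell_to_index(row, col, cols=10):
    """(row, col) -> grid serial number under the same assumption."""
    return row * cols + col
```

Under this assumption, serial number 21 in the example path corresponds to row 2, column 1.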
3.2. Population Initialization
In the existing literature, population initialization is usually performed by random generation [15]. This approach tends to produce a large number of infeasible paths in the initial population, which slows population evolution and lowers the quality of individual chromosomes [15]. On this basis, this paper proposes a heuristic strategy based on reference [16] that integrates an improved ACO algorithm to generate the initial population, as shown in Figure 2. The specific steps are summarized below:
- (1)
Set the population size M, and connect the points S and G with a straight line, using S as the starting point and G as the destination point.
- (2)
Divide the segment SG perpendicularly with dashed lines into n − 2 parts.
- (3)
On each dashed line l_i (i = 1, 2, …, n − 2), a point P_i is randomly generated, and its coordinates are calculated as shown in Equation (2). If the generated P_i is not in free space, a point in the neighborhood of P_i that lies in free space is regenerated to replace it.
In Equation (2), n represents the length of the chromosome, a is the grid size, (x_i, y_i) denotes the coordinates of the randomly generated point P_i, (x_S, y_S) and (x_G, y_G) are the coordinates of the starting point S and the target point G of the AGV movement, respectively, and m_rand is a random number selected from {−m, 1 − m, ⋯, m − 1, m}.
- (4)
Connect the randomly generated points into a continuous path. Starting from point S, if there is no obstacle between neighboring nodes, connect them with a straight line and go to step (6). If there are obstacles, go to the next step.
- (5)
Since the ACO algorithm is characterized by fast search speed and effective optimization on simple paths, this paper applies the ACO algorithm between two points separated by obstacles [17]. The larger of the horizontal distance D and the vertical distance H between the two points is taken as the side length of a square that defines the area to be planned, S. Assuming that the starting point is A and the ending point is B, the center of the area S lies at the midpoint of AB, i.e., ((x_A + x_B)/2, (y_A + y_B)/2). If path generation fails because the area is too small to contain a path, the area S is enlarged by one unit in all directions and planning is repeated. To keep initialization fast, the area S may be enlarged at most twice; if no feasible path is found after two enlargements, the attempt is considered a failure, and random points A and B are regenerated for path planning. The two nodes at the two ends of the obstacle are taken as the start point and the end point, respectively. Two key factors govern an ant's move from the current position i to the next position j: probabilistic selection and pheromone updating. To further improve the convergence of the ACO algorithm and the quality of the initial population, a weighting factor is introduced into the state transition probability formula:
where τ_ij(t) denotes the pheromone concentration, η_ij(t) denotes the heuristic (expectation) factor, α and β are the weighting coefficients of τ_ij(t) and η_ij(t), respectively, and allowed_k is the set of nodes the ant can move to next.
For pheromone updating, this paper uses the path length as the criterion to improve convergence: paths shorter than the average path length are rewarded, and longer paths are penalized. The specific equations are as follows:
where ρ denotes the pheromone volatilization coefficient; Δτ_ij denotes the pheromone deposited on path (i, j) in this iteration; τ_ij(t) denotes the pheromone left by the ants on path (i, j) at moment t; L_k denotes the path length of ant k; Q denotes the pheromone strength, which is generally a constant; L_best is the optimal path length; L_avg is the average length of the paths in this iteration; and λ is the weighting factor.
- (6)
Determine whether the goal point G has been reached. If it has, go to the next step; if it has not, return to step (4).
- (7)
If there are duplicate paths in the generated individuals, delete them.
- (8)
If the population size equals M, stop; otherwise, repeat steps 3 to 7.
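Steps (1)–(3) of this initialization can be sketched as follows. The helper names, the perpendicular-offset geometry, and the fallback for blocked points are simplifications assumed for illustration; they are not the paper's Equation (2).

```python
import random

def init_random_points(start, goal, n, m=2, grid=1.0, is_free=lambda p: True):
    """Sample one point near each of the n-2 dashed lines dividing SG.

    `start`/`goal` are (x, y) tuples; `m` bounds the random offset m_rand;
    `grid` is the grid size; `is_free` tests whether a point is in free space.
    """
    (xs, ys), (xg, yg) = start, goal
    dx, dy = xg - xs, yg - ys
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    pts = [start]
    for i in range(1, n - 1):            # one point per dashed line l_i
        t = i / (n - 1)                  # equidistant position along SG
        bx, by = xs + t * dx, ys + t * dy
        m_rand = random.randint(-m, m)   # random offset in {-m, ..., m}
        # offset perpendicular to SG, scaled by the grid size
        p = (bx - m_rand * grid * dy / norm, by + m_rand * grid * dx / norm)
        if not is_free(p):               # paper: resample in the free neighbourhood
            p = (bx, by)                 # fallback here: the point on SG itself
        pts.append(p)
    pts.append(goal)
    return pts
```

Neighboring points produced this way are then connected directly where possible, with the improved ACO invoked only between pairs separated by obstacles, as in steps (4)–(6).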
3.3. Fitness Function
In genetic algorithms, the fitness function is used to evaluate the quality of chromosomes under predetermined conditions to determine the evolutionary direction of the population. Considering the stability and economy of AGVs in practical applications, the evaluation criteria for generating paths are path length and path smoothness. Therefore, the objective function of the feasible path can be expressed as follows:
In Equation (7), path denotes the planned path, and the length and smoothness terms are given by Equations (8) and (9), respectively.
- (1)
Path length:
In Equation (8), n is the number of nodes in the generated path, and x_i and y_i are the abscissa and ordinate of the i-th node, respectively.
- (2)
Path smoothness:
In Equation (10), vector a_i = (x_{i+1} − x_i, y_{i+1} − y_i) and vector a_{i+1} = (x_{i+2} − x_{i+1}, y_{i+2} − y_{i+1}).
From Equation (7), it can be seen that the IGA evolves towards paths that are shorter and have fewer turns and smaller turning angles. Therefore, the fitness function is shown in Equation (11), and the optimal path has the smallest fitness value.
In Equation (11), α and β are the weights of the length and smoothness terms, respectively.
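A minimal sketch of such a fitness function is given below, assuming the smoothness term sums the turning angles between consecutive path segments. This is one plausible reading of Equations (8), (9), and (11); the paper's exact formulas may differ, and the weights are illustrative.

```python
import math

def path_length(path):
    """Equation (8): sum of Euclidean distances between consecutive nodes."""
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def path_smoothness(path):
    """Assumed smoothness measure: total turning angle along the path."""
    total = 0.0
    for i in range(len(path) - 2):
        ax, ay = path[i + 1][0] - path[i][0], path[i + 1][1] - path[i][1]
        bx, by = path[i + 2][0] - path[i + 1][0], path[i + 2][1] - path[i + 1][1]
        cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        total += math.acos(max(-1.0, min(1.0, cosang)))  # clamp for safety
    return total

def fitness(path, alpha=0.7, beta=0.3):
    """Weighted sum in the spirit of Equation (11); smaller is better."""
    return alpha * path_length(path) + beta * path_smoothness(path)
```

A straight path has zero smoothness penalty, so under this reading a straight-line route minimizes the fitness value, consistent with the stated evolutionary direction.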
3.4. Improved Genetic Operations
3.4.1. Improve Selection Operations
Traditional genetic algorithms usually use the roulette wheel selection method in the selection phase, which may lead to premature convergence and local optima [18]. To address this, this paper combines an improved roulette wheel selection method with tournament selection, and also employs an elite preservation strategy to prevent population degradation.
The enhanced roulette wheel selection method proceeds as follows. Individuals are sorted by fitness value and proportionally categorized into "high-fitness", "medium-fitness", and "low-fitness" groups. If the roulette wheel selects a high-fitness or medium-fitness individual, it is selected directly. In contrast to the common approach in the literature, in which low-fitness individuals are discarded during selection and corresponding high-fitness individuals are selected instead, this paper uses a 2:1 ratio: if a low-fitness individual is selected, reselection is performed from the high-fitness and medium-fitness groups. This preserves population diversity and effectively avoids local optima. In addition, tournament selection is introduced: a number of individuals are selected directly by tournament, further preserving diversity while the algorithm converges. Finally, an elite preservation strategy is introduced to prevent population degradation.
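The combined scheme can be sketched as follows. The tier boundaries, the 2:1 re-draw interpretation, the tournament size, and the elite count are all illustrative assumptions; the paper does not fix these values here.

```python
import random

def hybrid_select(population, fitness, k, n_tournament=2, n_elite=1):
    """Sketch of tiered roulette + tournament + elitism.

    Smaller fitness is better, matching Equation (11). Chromosomes are lists.
    """
    ranked = sorted(population, key=fitness)
    n = len(ranked)
    high, mid, low = ranked[:n // 3], ranked[n // 3:2 * n // 3], ranked[2 * n // 3:]
    selected = [ind[:] for ind in ranked[:n_elite]]        # elite preservation
    for _ in range(n_tournament):                          # tournament selection
        selected.append(min(random.sample(population, 3), key=fitness))
    weights = [1.0 / (1.0 + fitness(ind)) for ind in population]
    while len(selected) < k:                               # improved roulette wheel
        pick = random.choices(population, weights=weights, k=1)[0]
        if pick in low:
            # re-draw a low-fitness pick from the high/medium tiers (2:1 assumed)
            pick = random.choice(high) if random.random() < 2 / 3 else random.choice(mid)
        selected.append(pick)
    return selected[:k]
```

Keeping a verbatim copy of the best individual guarantees that the best fitness value never worsens between generations, which is the point of the elite strategy.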
3.4.2. Double Crossover Based on Edit Distance
As populations evolve, identical chromosomes become more common, and crossover operations on identical chromosomes are ineffective and slow down population evolution. Therefore, a screening process based on edit distance is applied before performing crossover operations [19]. If the edit distance between two chromosomes is too small, the crossover operation is not performed.
After the screening process, the chromosomes are sorted in ascending order based on their fitness values for single-point crossover. Specifically, two parent chromosomes are randomly selected from the population, and common genes (excluding S-points and G-points) are identified for crossover. If multiple common genes are found, any one of them is selected for crossover.
If no common genes can be found, the decision is based on the sorting results. If the chromosome's fitness value is in the top 60%, no crossover is performed, to avoid destroying the best chromosomes and to prevent population degradation. If the chromosome's fitness value is in the bottom 40%, a gene is randomly selected from each of the two parent chromosomes and a new "binding chromosome" is initialized between these two genes; the binding chromosome is then used as an intermediary for the crossover operation. This enhanced crossover operation can produce duplicated path segments, so when a binding chromosome is used during crossover, the duplicated segments are removed after the operation.
Figure 3 below shows the flowchart of the crossover operation based on edit distance.
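The screening step relies on the standard Levenshtein edit distance applied to gene sequences. A compact single-row implementation and the screening predicate might look like this; the threshold value is illustrative, not the paper's setting.

```python
def edit_distance(a, b):
    """Levenshtein distance between two chromosomes (gene sequences)."""
    dp = list(range(len(b) + 1))
    for i, ga in enumerate(a, 1):
        prev, dp[0] = dp[0], i            # prev holds dp[i-1][j-1]
        for j, gb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ga != gb))  # substitution
    return dp[-1]

def should_cross(a, b, min_dist=2):
    """Screening: skip crossover when the parents are nearly identical."""
    return edit_distance(a, b) >= min_dist
```

Because identical or near-identical parents yield offspring equal to themselves, skipping these pairs avoids wasted crossover operations without changing the search result.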
3.4.3. Simulated Annealing Three-Stage Mutation
The main purpose of the mutation operation is to improve the local search ability of the algorithm so that it can escape local optima, ensure global exploration, maintain population diversity, and prevent premature convergence [20]. The traditional single-point random mutation operation often suffers from path blocking and weak local search ability. In view of this, this paper proposes a three-stage mutation strategy that combines the concept of simulated annealing, building on reference [21].
Figure 4 below illustrates the improved mutation operation, and Figure 5 shows its flowchart. The specific steps are as follows.
- (1)
Use the mutation probability P_m to determine whether the current chromosome should be mutated. If P_m > random number r, mutation occurs.
- (2)
Randomly select two nodes (excluding the S and G points) from the parent individual to be mutated.
- (3)
These two random points divide the parent individual into three segments. Reinitializing these three segments results in a mutated chromosome.
After a trisection mutation, the newly acquired chromosome must be evaluated for acceptance according to the Metropolis criterion. If the fitness value of the new chromosome is lower than that of the old chromosome, the new chromosome is accepted. Otherwise, acceptance depends on the Metropolis probability defined below.
In Equation (12), f_new represents the fitness value of the new chromosome, f_old represents the fitness value of the old chromosome, and T denotes the current temperature. The calculation formula for T is given by
In Equation (13), T_0 represents the initial temperature, Q is the cooling rate, and Gen stands for the number of mutation occurrences.
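The Metropolis acceptance test and a geometric cooling schedule consistent with the description of Equations (12) and (13) can be sketched as follows. The exponential acceptance form and the T_0·Q^Gen schedule are standard simulated-annealing choices assumed here; the paper's exact equations are not reproduced.

```python
import math
import random

def accept(f_new, f_old, T):
    """Metropolis criterion: always accept improvements; otherwise accept
    a worse chromosome with probability exp(-(f_new - f_old) / T)."""
    if f_new < f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / T)

def temperature(T0, Q, gen):
    """Geometric cooling: T = T0 * Q**gen with 0 < Q < 1."""
    return T0 * Q ** gen
```

Early in the run the temperature is high, so worse chromosomes are often accepted and diversity is preserved; as Gen grows, T shrinks and the mutation stage becomes increasingly greedy.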
3.4.4. Deletion Operator
After the mutation process, the generated paths may still require several iterations to become smooth [22]. Deletion operations refine the genetic algorithm's chromosomes [23] by removing redundant nodes, speeding up population convergence, and improving algorithm efficiency. In the following example, the red dashed line represents the replanned path after the deletion operation.
- (1)
First, determine whether there is an obstacle between the starting point S and a subsequent node using the method described in [23]. If there is no obstacle (see Figure 6), connect the two nodes directly and delete the redundant nodes between them.
- (2)
Next, evaluate the connectivity between S and the following node. As shown in the figure, there is an obstacle between these two points, so that node is retained.
- (3)
The process is repeated using the retained node as the new reference point, applying the rules of the first two steps, until the evaluation reaches the final node.
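The three steps above amount to a line-of-sight pruning pass. A compact sketch follows, using a greedy farthest-visible-node variant of the same idea; `line_is_free` is assumed to test whether the straight segment between two nodes crosses an obstacle.

```python
def prune_path(path, line_is_free):
    """Deletion operator sketch: from each kept node, jump to the farthest
    node reachable by an obstacle-free straight segment."""
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        # back off from the end until the segment path[i] -> path[j] is clear
        while j > i + 1 and not line_is_free(path[i], path[j]):
            j -= 1
        pruned.append(path[j])  # all nodes between i and j are redundant
        i = j
    return pruned
```

When no segment is blocked, the whole path collapses to the straight line S–G; when every long segment is blocked, the path is returned unchanged, matching the retain rule of step (2).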
3.5. Adaptive Adjustment of Crossover and Mutation
The genetic algorithm's convergence and the quality of its optimal solution are affected by the crossover probability (P_c) and the mutation probability (P_m). A small P_c makes it difficult to generate new individuals and slows the search, while a large P_c can disrupt good genetic patterns. A small P_m impedes population renewal and hinders the generation of new individuals, while a large P_m leads to an undirected search and the loss of good individuals [23]. An optimal balance between P_c and P_m is crucial for the success of the algorithm. To address this issue, an adaptive adjustment strategy is adopted in this paper [24]: P_c and P_m are increased in the early stages to enhance the algorithm's search ability and decreased in the later stages to settle on the optimal individuals. The formulas used for this strategy are as follows:
In Equations (14) and (15), P_cmax and P_cmin represent the maximum and minimum values of the crossover probability, respectively, while P_mmax and P_mmin represent the maximum and minimum values of the mutation probability. f_max denotes the maximum fitness value in the population, f_avg the average fitness value in the population, f′ the fitness value of the individual with higher fitness during crossover, and f the fitness value of the individual involved in the mutation operation.
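Since Equations (14) and (15) are not reproduced here, the sketch below uses the classic adaptive-GA form (in the style of Srinivas and Patnaik), adapted to this paper's convention that smaller fitness is better. The exact formula, the f_min symbol, and the probability bounds are assumptions for illustration only.

```python
def adaptive_prob(f, f_avg, f_min, p_max, p_min):
    """Adaptive crossover/mutation probability (assumed form).

    Worse-than-average individuals use p_max to encourage renewal, while
    better individuals get a linearly reduced probability to protect them.
    Smaller fitness is better, matching Equation (11).
    """
    if f >= f_avg:
        return p_max
    # linear interpolation between p_min (best individual) and p_max (average)
    return p_min + (p_max - p_min) * (f - f_min) / (f_avg - f_min + 1e-12)
```

Applied with a high p_max early on (when most individuals sit near the average) and naturally shrinking probabilities as the population concentrates near the best fitness, this reproduces the increase-early, decrease-late behavior described above.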
3.6. Improved Flowchart
Figure 7 below shows a flowchart of the improved genetic algorithm of this paper.