Article

An Improved Ant Colony Algorithm with Deep Reinforcement Learning for the Robust Multiobjective AGV Routing Problem in Assembly Workshops

College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(16), 7135; https://doi.org/10.3390/app14167135
Submission received: 5 July 2024 / Revised: 1 August 2024 / Accepted: 12 August 2024 / Published: 14 August 2024

Abstract

Vehicle routing problems (VRPs) are challenging problems, and many variants of the VRP have been proposed. However, few studies on the VRP have combined robustness and just-in-time (JIT) requirements under uncertainty. To address this, this paper proposes the just-in-time-based robust multiobjective vehicle routing problem with time windows (JIT-RMOVRPTW) for the assembly workshop. Based on the conflict between uncertain time and JIT requirements, a JIT strategy is proposed. To measure the robustness of a solution, a dedicated metric is designed as an objective. Afterwards, a two-stage nondominated sorting ant colony algorithm with deep reinforcement learning (NSACOWDRL) is proposed. In stage I, ACO is combined with NSGA-III to obtain the Pareto frontier, and a pheromone update strategy and a transfer probability formula are designed based on the model. A DDQN is introduced as a local search algorithm; its networks are trained on Pareto solutions and participate in probabilistic selection and nondominated sorting. In stage II, the feasibility of the Pareto frontier is quantified by Monte Carlo simulation, and a diversity-robust selection based on uniformly distributed weights in the solution space selects robust Pareto solutions that also take diversity into account. The effectiveness of the NSACOWDRL is demonstrated through comparative experiments with other algorithms on instances. The impact of the JIT strategy is analyzed, and the effect of the networks on the NSACOWDRL is further discussed.

1. Introduction

The vehicle routing problem (VRP) was proposed by Dantzig and Ramser in 1959 [1]; it aims to find optimal routes subject to given constraints and objectives. The problem is widely applied in traffic management, logistics and transportation, and other fields, and it has been extensively studied. Several VRP variants have been proposed by applying the VRP in different environments, such as the capacitated vehicle routing problem (CVRP) and the vehicle routing problem with time windows (VRPTW). Many researchers have performed systematic literature reviews on the VRP.
Liu et al. [2] extracted data and analyzed relevant literature within 2018–2022 to address the problems associated with VRPTW. Li et al. [3] analyzed the articles which solve VRPs by Learning-Based Optimization (LBO) algorithms from various aspects, evaluated the performance of representative LBO algorithms, summarized the types of problems to which LBO is applicable, and proposed directions for improvement. Asghari et al. [4] investigated the contributions related to the green VRP and proposed a classification scheme based on the variants considered in the scientific literature, so as to provide a comprehensive and structured survey of the state of knowledge, as well as to discuss the most important features of the problems and to indicate future directions. Gunawan et al. [5] provided and categorized a list of VRP datasets. Salehi Sarbijan et al. [6] systematically reviewed and analyzed recent research on the VRP from 2001 to 2022, focusing on new and emerging topics in VRPs. Based on the basic VRP, Zhang et al. [7] classified VRPs according to their characteristics and practical applications, gave the unified description and mathematical model of each type of problem, and then analyzed the solution methods of each type of VRP. Ni et al. [8] comprehensively analyzed and summarized the literature related to VRPs from 1959 to 2022 by using the knowledge graph, and classified the VRP models and their solutions.
However, the complexity of the VRP increases with the additional constraints of real life. The VRP is NP-hard, so finding the exact optimal solution is difficult. Therefore, methods proposed to solve the VRP aim to find approximate optimal solutions efficiently.
VRPs for AGVs are solved by exact algorithms and intelligent evolutionary algorithms. The exact algorithms include the branch-and-bound method, integer linear programming, and dynamic programming, which are suitable for solving small-scale VRPs with simple structures. Soysal et al. [9] proposed a simulation-based restricted dynamic programming approach to solve the VRP associated with traffic emissions. Reihaneh [10] proposed a branch-and-price algorithm for the vehicle routing with demand allocation problem. However, exact algorithms take prohibitively long to compute when solving large-scale VRPs.
In contrast, intelligent evolutionary algorithms can find satisfactory approximate optimal solutions in a limited time when solving large-scale VRPs. Therefore, many researchers have improved and designed intelligent evolutionary algorithms. Wang et al. [11] proposed an improved multiobjective genetic algorithm with tabu search (IMOGA-TS), which combines local and global searches and improves the solutions at each iteration by TS, to solve a collaborative multidepot EV routing problem with time windows and shared charging stations (CMEVRPTW-SCS). Pierre [12] proposed a multiobjective genetic algorithm with stochastic partially optimized cyclic shift crossover to solve the multiobjective VRPTW.
A multiobjective model is often transformed into a single-objective model by weighting or other conversion methods. In the VRP, however, the total objective is often composed of multiple mutually conflicting objectives, which means that improving one objective may degrade the others. These objectives may not be linearly related and cannot simply be combined into a single-objective problem with weights. Therefore, using a single-objective model to solve the VRP has limitations.
Multiobjective optimization problems (MOPs) optimize multiple, often conflicting, objectives at the same time. In recent decades, researchers have proposed many methods for solving MOPs, and multiobjective evolutionary algorithms (MOEAs) are the mainstream methods. Wang [13] proposed an enhanced algorithm based on SPEA2 (ESPEA) to solve the pickup vehicle scheduling problem with mixed steel storage (PVSP-MSS), which simultaneously optimizes the makespan of the pickup vehicles and the makespan of the steel logistics parks. Geng [14] proposed an improved hyperplane-assisted evolutionary algorithm (IhpaEA) to solve the distributed hybrid flow shop scheduling problem (DHFS), which takes the maximum completion time and energy consumption as objectives.
Based on their algorithmic strategies, most MOEAs can be classified into the following categories: (1) based on the dominance relationship, such as NSGA-II [15], SPEA [1], and PESA-II [16]; (2) based on decomposition, such as MOEA/D [17], MOEA/D-M2M [18], and RVEA [19]; and (3) based on performance evaluation indicators, such as IBEA [20], SMS-EMOA [21], and DNMOEA/HI [22].
From the above algorithms, Pareto solutions can be obtained. However, for MOPs, the number of Pareto solutions increases as the number of objectives increases. In practical scenarios, not all solutions make sense, especially for VRPs with uncertain times. Uncertain times require the Pareto solutions that are most resistant to them compared with deterministic times, yet uncertainty in VRPs is rarely considered in these algorithms. To solve this problem, robust optimization has been proposed, in which the researcher searches for the robust solutions with the strongest resistance to interference. Robust optimization [23] means that the solution and its performance remain relatively unchanged when exposed to uncertain conditions; the solution is usually evaluated under the most unfavorable uncertainty. Xia et al. [24] proposed a new method that sequentially approaches the robust Pareto front (SARPF) from nominal Pareto points to solve MOPs with uncertainties. Jin et al. [25] introduced and discussed existing methods for dealing with different uncertainties and studied the relationships between different categories of uncertainty. Scheffermann et al. [26] introduced and compared the NSGA-II with an improved predator–prey algorithm for the VRPTW with uncertain travel times. He et al. [27] used a robust multiobjective optimization evolutionary algorithm (RMOEA) to solve robust multiobjective optimization problems (RMOPs); it consists of two parts: multiobjective optimization using the improved NSGA-II and robust optimization to search for a robust optimal frontier. The robust optimization in this paper is based on Monte Carlo simulation, which simulates the practical environment so that the results approach the practical situation through a large number of simulation runs. The number of feasible Monte Carlo runs under the disturbance of the uncertainty coefficients is taken as the criterion for assessing the robustness of a solution.
In the VRP, the AGV operation time, the total travel distance of the AGVs, or the number of AGVs are often used as objectives, whereas just-in-time (JIT) material distribution is often ignored. In addition, although the literature discusses the robustness of VRPs, few papers have combined robustness with JIT requirements. JIT pursues just-in-time arrivals, and uncertain times can seriously affect its realization, so combining JIT with robust optimization can improve the possibility of achieving JIT.
To solve the above issues, in this paper, the multiobjective robust VRP model that simultaneously considers uncertainty and JIT strategy is proposed. In addition, an improved ACO algorithm with deep reinforcement learning for assembly workshops is proposed. This study aims to make the following contributions.
(1) A just-in-time-based robust multiobjective vehicle routing problem with time windows (JIT-RMOVRPTW) is constructed which simultaneously optimizes three objectives. The model accounts for uncertain time by introducing uncertainty coefficients. A robustness metric that incorporates uncertain time is defined to measure the robustness of a solution, and robustness is treated as a dedicated optimization objective so that robust solutions can be filtered more effectively.
(2) The JIT-RMOVRPTW considers the two conflicting goals of JIT and robustness at the same time. In this model, a JIT strategy under uncertainty is proposed. The traditional time window is eliminated, and the JIT time, which serves as the deadline for the workstation, is introduced. The model also eliminates the fixed departure time of the AGVs, so that each AGV can flexibly adjust its departure time from the depot according to the JIT time, eliminating the waiting time at workstations caused by early arrivals.
(3) A two-stage nondominated sorting ant colony algorithm with deep reinforcement learning (NSACOWDRL) is proposed to improve the robustness of the solutions by double screening. The NSACOWDRL divides the problem into two stages: a multiobjective optimization problem and a robust optimization problem. In the first stage, an initial search is performed with the robustness metric as one of the objectives. The reference-point-based niche preservation strategy used in NSGA-III [28] is introduced into routing selection. The nondominated rank, the elite path strategy, and the max-min pheromone strategy are introduced into the pheromone update strategy. Furthermore, the DDQN is introduced as a local search algorithm. The network is trained on the Pareto solutions, which guide its learning direction, after which the trained network participates in nondominated sorting. Moreover, a probabilistic transfer formula based on the network and the objectives is designed. In the second stage, the feasibility of the solutions is quantified through Monte Carlo simulation, and the solutions are then partitioned according to uniformly distributed weights in the solution space. Each partitioned solution set is screened separately for robustness. The set of screened solutions forms the robust Pareto frontier that takes the diversity of solutions into account.
(4) The performance of the proposed algorithm is validated by comparative experiments. The paper also further discusses the effect of network structure on the performance of the algorithm.
The remainder of this paper is organized as follows: In Section 2, the background is introduced. Section 3 describes the related VRP in the assembly workshop. In Section 4, the NSACOWDRL is described in detail. The experimental design and analysis are presented in Section 5. Finally, conclusions and suggestions for future work are given in Section 6.

2. Background

2.1. Vehicle Routing Problem

With further study of real-world constraints and objectives, many VRP variants, such as the green VRP (GVRP) and the dynamic VRP (DVRP), have been proposed and investigated. Moreover, depending on the number of objectives, VRPs can be categorized into single-objective VRPs and multiobjective VRPs.
Many models and algorithms have been proposed for the single-objective VRP. Kaijun et al. [29] constructed a new VRPTW model with multiple distribution centers based on urban rail transportation and proposed a novel concentration-immune algorithm particle swarm optimization (C-IAPSO). Zhang et al. [30] proposed and established a reverse logistics VRP with backhauls (RL-VRPB) for waste packaging recycling route planning, along with an improved scatter search algorithm (ISS) that introduces vehicle residual space recovery and a local search strategy. To solve the DVRP, Sabar et al. [31] proposed a population-based iterated local search (ILS) algorithm that integrates skewed variable neighborhood descent (SVND), a quality-and-diversity updating strategy, and an adaptive multioperator perturbation procedure. Feng et al. [32] proposed a new algorithm that transfers knowledge from useful customer representations captured from previously solved VRPs. Li et al. [33] utilized a novel neural network integrated with a heterogeneous attention mechanism to solve the pickup and delivery variant of the VRP. Jia et al. [34] proposed a novel bilevel ACO algorithm to solve the capacitated EV routing problem (CEVRP). The algorithm divides the problem into two subproblems: first, an order-first split-second max-min ant system algorithm solves the problem while ignoring the electricity constraint; second, a new removal heuristic combined with a restricted enumeration method generates the charging schedule for the generated routes to satisfy the electricity constraint.
However, these VRP algorithms do not consider the multiobjective characteristics of real problems and lack multidimensional consideration of them. Therefore, multiobjective VRPs as well as many variants of multiobjective algorithms have been proposed. Elgharably et al. [35] proposed a new hybrid search genetic algorithm that combines the SPEA with a resultant local search heuristic to solve the stochastic GVRP, which simultaneously considers economic, environmental, and social aspects. Motaghedi-Larijani [36] proposed four metaheuristic algorithms that combine different multiobjective algorithms to solve routing and scheduling of vehicles on a cross-dock, minimizing the total shipping and tardiness costs and the number of outbound open doors. Yin [37] proposed M-NSGA-II, which combines the multifactorial evolutionary algorithm (MFEA) with NSGA-II to solve the VRP in low-carbon intelligent transportation; this approach considers carbon emissions, the number of vehicles, and the distribution cost. Mazdarani [38] proposed a hybrid multiobjective genetic algorithm to solve the bi-objective overlapped-links VRP (OLVRP) for valuable goods transportation, which takes the total cost and the risk of routing as objectives. Wang et al. [39] established a five-objective multidepot VRP with time windows (MDVRPTW) and proposed a two-stage MOEA to solve the problem. In stage I, the algorithm focuses on finding extreme solutions and forms a coarse Pareto front. Stage II extends the found solutions to approximate the whole Pareto front. The two-stage strategy provides a new method for balancing convergence and diversity when solving MOPs.
Nevertheless, these multiobjective algorithms seldom consider the uncertainty of the real problem, which is unavoidable in practical applications and significantly disrupts problem-solving. With the study of uncertainty, researchers have turned to incorporating robustness into VRPs, so that solutions are insensitive to disturbances and maintain optimization performance at an acceptable level under them. Robustness means maintaining feasibility as well as the best possible performance under uncertainty constraints. The goal of robust optimization is to make full use of the search space to find the best possible solution while minimizing the effect of uncertainty.
Deb et al. [40] proposed a novel, robust, two-stage planning model for charging station placement in which placement is determined by fuzzy inference considering distance, road traffic, and grid stability. A hybrid algorithm combining chicken swarm optimization and teaching-learning-based optimization (CSO TLBO) was proposed to obtain the Pareto front, and fuzzy decision-making was used to compare the Pareto optimal solutions. Muñoz et al. [41] proposed a novel strategy to solve the VRPTW with uncertain demands using adaptive credibility thresholds, which tries to find good solutions with relaxed capacity constraints; for this purpose, a new split decodification algorithm that combines a genetic algorithm with local search strategies was designed. Wang et al. [42] proposed a robust routing optimization model for multivehicle electric delivery that considers random changes in demand and transportation time at different customer points to minimize the distribution cost and carbon emissions; additionally, an improved genetic algorithm was proposed to solve this problem. Duan et al. [43] designed a new form of disturbance on travel time that is determined by the maximum disturbance degree and constructed a robust multiobjective VRPTW, which adopts two conflicting objectives to capture the uncertainty characteristics. Moreover, a robust multiobjective particle swarm optimization approach that adopts a coding method involving priority assignment and decoding based on a greedy strategy was proposed, and a new metric, Rob, was designed to measure the robustness of the solutions. To take full advantage of the problem characteristics, the algorithm adopts problem-based and route-based local searches.
In the current literature, robustness methods rarely take into account the JIT objective of the VRP, and there are fewer studies on the combination of robust VRPs with MOPs. In this paper, the solutions obtained by the algorithm are subjected to an uncertainty test. Robust solutions that take into account diversity are obtained based on the distribution of the solutions in the solution space.

2.2. Ant Colony Algorithm

The ant colony algorithm (ACO) [44] is a widely studied intelligent evolutionary algorithm. The ACO algorithm consists of two main steps: calculation of the state transfer probabilities and updating of the routing pheromone concentrations:
(1) The node transfer probabilities:
When the ant selects the next transfer node, it selects nodes by a roulette wheel after calculating the transfer probability of each node. To prevent ants from repeatedly selecting nodes, the selected node is added to the tabu list.
(2) Updating of routing pheromone concentrations:
The ACO algorithm mimics the behavior of ants that leave pheromones on each route on which they walk and introduces a pheromone volatilization mechanism to prevent the algorithm from falling into local optima. After all the ants have explored all the nodes, the pheromone concentration of each route is updated according to the pheromone update strategy.
The ACO algorithm is used in various fields, such as assignment problems, scheduling problems, and resource allocation. Chen et al. [45] proposed the product ant colony optimization (PACO) algorithm, which combines the ACO algorithm with production products to improve the productivity of textile manufacturers and balance the total operating time of all machines in each process. Alwin et al. [46] combined the binary ant colony algorithm (BACA) with other search optimization algorithms, such as hill climbing (HC) and simulated annealing (SA), to ensure optimal Wireless Sensor Network (WSN) coverage.
The ACO algorithm has shown good performance and effectiveness in solving VRPs. Huang et al. [47] employed an ACO to solve the feeder vehicle routing problem (FVRP). Li et al. [48] applied an improved ACO with an innovative pheromone update approach to solve the multidepot GVRP (MDGVRP). Current research on improving the ACO for VRPs has focused on the state transfer strategy, the pheromone update strategy, the introduction of heuristic rules, individual learning strategies, and integration with other algorithms. Jiao et al. [49] proposed a polymorphic ACO with an adaptive state transfer strategy and an adaptive information updating strategy to improve the global search capability of the algorithm. Ren [50] proposed a new improved ACO to solve the VRP with split pickup and delivery of multicategory goods; the algorithm adopts a TS operator that contains five neighborhood operators to improve the local search ability and introduces SA mechanisms to update the global pheromones. Luo et al. [51] proposed an improved ACO that adopts a pseudo-random state transition rule and avoids blind search in early planning through unequal allocation of the initial pheromone. Li [52] proposed the multiobjective ant colony system algorithm for epidemic situations (MOACS4E) to solve a new multiobjective VRP model for epidemic situations that considers both the traditional travel cost and the prevention cost of the VRP in epidemic situations. MOACS4E adopts two ant colonies to optimize the problem and proposes a new pheromone fusion-based solution generation method. In addition, algorithms such as simulated annealing [53] and the artificial fish swarm algorithm [54] have been combined with the ACO.
Deep reinforcement learning (DRL) is currently one of the important research directions in the field of machine learning. Common DRL algorithms include Deep Q-Networks (DQNs), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). DRL can realize end-to-end learning by interacting directly with the environment. With excellent feature extraction ability, it can transfer between different tasks, effectively overcome various perturbations in the system, and solve high-dimensional, large-scale problems. However, it suffers from insufficient exploration of the environment, poor robustness, and susceptibility to deceptive gradients caused by deceptive rewards; in addition, the training process of DRL is quite complicated for real problems. Evolutionary algorithms have the advantages of strong global search capability, good robustness, and parallelism, but they struggle to carry out transfer learning in the face of perturbations. Combining reinforcement learning and evolutionary algorithms to overcome the shortcomings of each and further optimize the algorithms is one of the current research directions. Frías et al. [55] proposed four innovative hybrid algorithms that combine Machine Learning (ML) clustering techniques with metaheuristic approaches inspired by the ACO to solve the energy-minimizing VRP (EMVRP).
Neural networks are good at learning but not at searching, while evolutionary algorithms are good at searching but not at learning. Combining evolutionary algorithms with DRL can therefore compensate for the shortcomings of DRL. Thus, in this paper, an ACO is combined with a Double DQN (DDQN). The DDQN is introduced to overcome the tendency of the ACO to fall into local optima and thereby improve its optimization ability; meanwhile, the good search ability of the ACO on the VRP is used to guide the DDQN in learning and searching.

3. Problem Description and Mathematical Models

3.1. Problem Description

In this paper, the multiobjective multi-AGV routing planning mathematical model JIT-RMOVRPTW was constructed. The VRP in the assembly workshop is described as follows:
$i, j$ denote workstation indices, with $i, j \in V$ and $V = \{0, 1, \ldots, n\}$ the set of workstations; when $i$ or $j$ is 0, it represents the raw material library. There are $K$ AGVs with identical performance in the raw material library. Each has a maximum load capacity of $Q$ and an average travel speed of $v$; the maximum travel distance of an AGV is unlimited. Materials are delivered by material carts pulled by the tail hook of the AGV, and each AGV can pull up to $R$ carts, each with a maximum capacity of $Q_r$. There are $N$ workstations in the assembly workshop, and a total of $M$ materials are needed. The correspondence between workstations and materials is known, the information about the materials is known, and $q_i$ is the total number of material boxes required by workstation $i$. The position of each workstation is known, and $d_{ij}$ is the distance between stations. AGVs are dispatched to deliver the $M$ materials to the $N$ workstations. During delivery, the AGVs are disturbed by random events such as traffic jams, so the transportation time is uncertain; the disturbed arrival time is denoted $\tilde{t}_i$. In the JIT-RMOVRPTW, the traditional time window $[a_i, b_i]$ is canceled, and materials must be delivered on time by the deadline $JIT_i$. Unloading the materials at workstation $i$ takes the unloading time $s_i$, which is determined by the number of boxes of materials delivered to that station.
The VRP problem in the assembly workshop is based on the following assumptions:
(1) There is only one raw material library in the entire workshop, and the raw material library is able to meet the material requirements of all the workstations.
(2) The material demand at each station is less than the load capacity of the AGV, and the unloading time of the AGV at each station is known.
(3) All the AGVs depart from the same raw material library and must return to the raw material library after unloading the transported material.
(4) AGV acceleration and deceleration, charging, etc., are not considered during vehicle operation.
(5) Materials are placed in standard material boxes, and the material requirements of each workstation cannot be split. The AGV transports materials through the material carts. The boxes of the materials required for the same workstation are loaded in the same material cart, which is not mixed with other workstation materials.

3.2. Mathematical Model

Based on the traditional VRPTW, the JIT-RMOVRPTW adds uncertain time constraints and designs a new metric alpha based on uncertain time as an objective to measure the robustness of the solution. In addition, the JIT-RMOVRPTW considers JIT delivery. JIT time is used as the deadline, and the AGV adjusts the departure time according to the JIT time of the workstation to achieve the objective of the JIT.
The JIT-RMOVRPTW can be formulated as follows:
The three objectives for the JIT-RMOVRPTW, each of which needs to be minimized, can be stated as follows:
$f_d$ (total travel distance):
$\min f_d = \sum_{k=1}^{K} \sum_{i=0}^{N} \sum_{j=0}^{N} d_{ij}\, x_{ijk}$
$f_{alpha}$ (robustness metric):
$\min f_{alpha} = \dfrac{1}{\sum_{i=1}^{N} alpha_i}$
$f_{JIT}$ (JIT penalty time):
$\min f_{JIT} = \sum_{k=1}^{K} w_k$
$f$ (overall objectives):
$\min f = \left( f_d,\ f_{alpha},\ f_{JIT} \right)$
The constraints associated with the JIT-RMOVRPTW are added as follows:
  • Capacity constraints: The number of material boxes for each workstation must not exceed the maximum capacity of a material cart, and the number of material boxes delivered by each AGV must not exceed the maximum load capacity of the AGV. $E_i$ denotes the kinds of materials required at station $i$, and $q_{ie}$ denotes the number of material boxes of material $e$ for workstation $i$.
$q_i = \sum_{e=1}^{E_i} q_{ie}, \quad i = 1, 2, \ldots, n$
$q_i \le Q_r$
$\sum_{i=0}^{N} q_i\, y_{ik} \le Q, \quad k = 1, 2, \ldots, m$
  • Constraints on the number of material carts: The number of workstations served by each AGV must not exceed the number of its material carts, and the number of material carts of each AGV must not exceed the maximum number of carts an AGV can pull. $R_k$ denotes the number of material carts of the $k$th AGV.
$\sum_{i=0}^{N} y_{ik} \le R_k \le R, \quad k = 1, 2, \ldots, m$
  • Service time: The service time of each station is determined by the number of material boxes to be delivered to that station. $T$ denotes the time taken to unload one material box.
$s_i = \begin{cases} T\, q_i, & i = 1, 2, \ldots, n \\ 0, & i = 0 \end{cases}$
  • Distribution constraints: Each workstation can be delivered only once.
$\sum_{k=1}^{K} y_{ik} = 1, \quad i = 1, 2, \ldots, n$
  • AGV arrival number constraints: Only one AGV arrives at each workstation.
$\sum_{i=0}^{n} x_{ijk} = y_{jk}, \quad j = 1, 2, \ldots, n;\ k = 1, 2, \ldots, m$
$\sum_{j=0}^{n} x_{ijk} = y_{ik}, \quad i = 1, 2, \ldots, n;\ k = 1, 2, \ldots, m$
  • Material requirement constraints: Material requirements are met at each workstation.
$\sum_{k=1}^{K} \sum_{i=0}^{N} x_{ijk} = 1, \quad j = 1, 2, \ldots, n$
  • Loop constraints: Each subloop starts at the raw material library and ends at the raw material library.
$\sum_{k=1}^{K} \sum_{j=1}^{N} x_{0jk} = \sum_{k=1}^{K} \sum_{i=1}^{N} x_{i0k}$
  • Workstation demand time: The JIT goal conflicts with the robustness goal, so the lead time is used as the safety time for the vehicle. The JIT-RMOVRPTW eliminates the traditional time windows, with $JIT_i$ serving as the deadline for the JIT problem (as shown in Figure 1). $p$ is the slack time. As shown in Equation (17), the larger the $q_i$, the higher the JIT requirement of station $i$.
$b_i = \min_e \{ b_{ie} \}$
$a_i = \max_e \{ a_{ie} \}, \quad a_{ie} \le b_i$
$JIT_i = b_i - (b_i - a_i)\, p \left( 1 - \dfrac{q_i}{Q} \right)$
  • The arrival time of the AGVs:
$t_j = t_i + \dfrac{d_{ij}}{v}\, x_{ijk} + s_i, \quad i = 0, 1, 2, \ldots, n;\ j = 0, 1, 2, \ldots, n$
  • Uncertainty constraints: $\tilde{t}_i$ is the uncertain arrival time at station $i$. $\zeta$ is the overall random disturbance. $\bar{D}$ is the average distance between workstations. $\tilde{u}_{ij}$ is the characteristic coefficient between stations $i$ and $j$; $\tilde{u}_{0i}$ has unique characteristics, so it is treated separately in Equation (19).
$\tilde{u}_{ij} = \begin{cases} d_{ij} / \bar{D}, & i = 1, 2, \ldots, n;\ j = 1, 2, \ldots, n \\ 1, & i = 0;\ j = 1, 2, \ldots, n \end{cases}$
$\tilde{t}_j = t_j + \tilde{u}_{ij}\, \bar{\zeta}, \quad \bar{\zeta} = \dfrac{\zeta}{n}$
  • Feasibility of workstations: To measure the robustness of the solution, the metric alpha is proposed in the JIT-RMOVRPTW. Alpha is defined according to the random distribution of the specific scenario. In this paper, $\zeta$ follows a continuous triangular distribution with lower limit $min$, peak location $mid$, and upper limit $max$, which can be expressed as $[min, max, mid]$. Equation (21) represents the range of random arrival times at station $j$, and Equation (22) represents the degree of uncertainty from station $i$ to station $j$.
$\tilde{t}_j = \tilde{t}_i + \dfrac{d_{ij}}{v}\, x_{ijk} + s_i + \tilde{u}_{ij}\, \zeta, \quad i = 0, 1, 2, \ldots, n;\ j = 0, 1, 2, \ldots, n$
$\widetilde{interval}_{ij} = \max(\tilde{u}_{ij}\, \bar{\zeta}) - \min(\tilde{u}_{ij}\, \bar{\zeta})$
$alpha_i$ represents the feasibility of station $i$; its value is given by Equation (23). The maximum value of alpha is set to 2 to prevent an excessively large alpha from affecting the realization of the JIT.
$alpha_i = \begin{cases} 0, & \min(\tilde{t}_i) > b_i \\ \dfrac{\left( b_i - \max(\tilde{t}_i) - \min(\tilde{u}_{ij}\bar{\zeta}) \right)^2}{\widetilde{interval}_{ij}\left( \mathrm{mid}(\tilde{u}_{ij}\bar{\zeta}) - \min(\tilde{u}_{ij}\bar{\zeta}) \right)}, & \max(\tilde{t}_i) > b_i,\ \min(\tilde{t}_i) < b_i,\ b_i - \max(\tilde{t}_i) \le \mathrm{mid}(\tilde{u}_{ij}\bar{\zeta}) \\ 1 - \dfrac{\left( \max(\tilde{u}_{ij}\bar{\zeta}) - \left( b_i - \max(\tilde{t}_i) \right) \right)^2}{\widetilde{interval}_{ij}\left( \max(\tilde{u}_{ij}\bar{\zeta}) - \mathrm{mid}(\tilde{u}_{ij}\bar{\zeta}) \right)}, & \max(\tilde{t}_i) > b_i,\ \min(\tilde{t}_i) < b_i,\ b_i - \max(\tilde{t}_i) > \mathrm{mid}(\tilde{u}_{ij}\bar{\zeta}) \\ \dfrac{b_i - \max(\tilde{t}_i)}{\widetilde{interval}_{ij}}, & \max(\tilde{t}_i) < b_i,\ \dfrac{b_i - \max(\tilde{t}_i)}{\widetilde{interval}_{ij}} \le 2 \\ 2, & \dfrac{b_i - \max(\tilde{t}_i)}{\widetilde{interval}_{ij}} > 2 \end{cases}$
  • Time window constraints:
$t_i \le JIT_i, \quad i = 1, 2, \ldots, n$
  • Advance time for materials:
$t_i \le JIT_{ie}, \quad w_{ie} = JIT_{ie} - t_i, \quad i = 1, 2, \ldots, n;\ e = 1, 2, \ldots, E_i$
  • Penalty time: $w_k^{min}$ denotes the minimum advance time among the materials delivered by the $k$th AGV, and $w_k$ denotes the total penalty time of the $k$th AGV. Equation (28) indicates that the departure of the $k$th AGV from the raw material library is delayed by $w_k^{min}$ while still satisfying the earliest demand time among the workstations it serves. $N_k$ denotes the set of stations served by the $k$th AGV.
$w_i = \sum_{e=1}^{E_i} w_{ie}, \quad i = 1, 2, \ldots, n$
$w_k^{min} = \min\{ w_{ie} \}, \quad i \in N_k,\ e \in E_i$
$w_k = \sum_{i \in N_k} w_i - w_k^{min}$
  • Decision variables:
$x_{ijk} = \begin{cases} 1, & \text{the } k\text{th AGV travels from workstation } i \text{ to workstation } j \\ 0, & \text{otherwise} \end{cases}$
$y_{ik} = \begin{cases} 1, & \text{delivery to workstation } i \text{ is completed by the } k\text{th AGV} \\ 0, & \text{otherwise} \end{cases}$
Thus, the JIT-RMOVRPTW can be summarized by three objectives defined by Equations (1)–(4), subject to constraint conditions defined by Equations (5)–(30).
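To make the model concrete, the following minimal Python sketch evaluates one AGV route against the three objectives. It is an illustration only: the helper names (`route`, `dist`, `service`, `jit`, `zeta`) are assumptions, and the per-station feasibility uses a simplified linear approximation of the triangular-distribution formula in Equation (23) rather than the exact piecewise expression.

```python
import numpy as np

def route_objectives(route, dist, service, jit, v=1.0, zeta=(-2.0, 6.0, 1.0)):
    """Evaluate one AGV route (list of station ids, depot = 0).

    dist[i][j]: distance between stations, service[i]: unloading time s_i
    (service[0] must be 0), jit[i]: deadline JIT_i, v: AGV speed,
    zeta: (min, max, mid) of the triangular disturbance.  Illustrative only.
    """
    z_min, z_max, z_mid = zeta
    f_d, t, alphas, advances = 0.0, 0.0, [], []
    prev = 0                                    # start at the raw material library
    for i in route:
        f_d += dist[prev][i]
        t += dist[prev][i] / v + service[prev]  # nominal arrival time, Eq. (18)
        t_max, t_min = t + z_max, t + z_min     # disturbed arrival range (one hop, simplified)
        interval = z_max - z_min
        if t_min > jit[i]:
            alpha = 0.0                         # even the best case arrives late
        elif t_max > jit[i]:
            alpha = (jit[i] - t_min) / interval # linear stand-in for the triangular CDF
        else:
            alpha = min((jit[i] - t_max) / interval, 2.0)  # capped at 2, as in Eq. (23)
        alphas.append(alpha)
        advances.append(max(jit[i] - t, 0.0))   # advance time w_i before the deadline
        prev = i
    f_d += dist[prev][0]                        # return to the depot
    w_min = min(advances) if advances else 0.0  # delay the departure by w_k_min, Eq. (28)
    f_jit = sum(a - w_min for a in advances)
    f_alpha = 1.0 / max(sum(alphas), 1e-9)      # Eq. (2): reciprocal of the summed alphas
    return f_d, f_alpha, f_jit
```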

4. Proposed Algorithm

In this section, the overall NSACOWDRL process is first described. Then, the six components of NSACOWDRL are introduced, which include (1) solution construction, (2) nondominated sorting based on reference points, (3) the pheromone updating strategy, (4) local search based on the DDQN, (5) the probabilistic transfer formula, and (6) the robust optimization method. The optimization process of the NSACOWDRL is given in Figure 2.

4.1. Framework of the NSACOWDRL

The framework of the NSACOWDRL is given in Figure 3. The NSACOWDRL consists of two stages: fully searching the solution space to generate Pareto solutions and robust selection of Pareto solutions.
(1) Stage 1: Solution Space Characteristic Exploration and Pareto Solution Generation
Step 1: Initialize the algorithm parameters, including pheromone matrix, distance matrix, neural networks, etc. The initial number of iterations is 0.
Step 2: Obtain the set of feasible routes according to the solution construction strategy (see Section 4.2 for details).
Step 3: The routings are stratified by nondominated sorting based on the reference points, and N routings are selected for this iteration.
Step 4: The pheromone matrix is updated according to the pheromone update strategy.
Step 5: At specific iterations, the DDQN network is trained on the nondominated solutions, and N paths are then obtained from the trained network through action selection to participate in the next iteration.
Step 6: Increase the iteration counter by 1.
Step 7: If the maximum number of iterations is reached, output the solution set, otherwise skip to step 2.
(2) Stage 2: Robust Multiobjective Optimization
Step 1: The Pareto solutions obtained in stage 1 are partitioned by weights that are uniformly distributed in the solution space.
Step 2: Each solution is assigned to the weight closest to this solution.
Step 3: Through Monte Carlo simulation, the number of feasible Monte Carlo runs of each solution is obtained to quantify its feasibility.
Step 4: The set of solutions belonging to each weight is then selected according to feasibility, respectively, to obtain the set of Pareto solutions for each weight. Robust solutions that take diversity into account are obtained.

4.2. Solution Construction

NSACOWDRL constructs solutions by imitating the behavior of ant colonies foraging for food with the following steps.
Step 1: There are m ants in the raw material library. Set the ant counter to 0.
Step 2: Each ant sets out independently.
Step 3: The ant calculates the workstations that satisfy the constraints based on the current information and obtains the set of feasible workstations.
Step 4: If the feasible set is empty and the ant does not visit all the workstations, the ant returns to the raw material library and sets out again. Otherwise, calculate the node transfer probability according to the transfer probability formula and select the next node from the feasible set via roulette. Then, update the feasible set.
Step 5: Repeat Steps 3–4 until the ant has served all workstations, and record the route of the ant. Increase the ant counter by 1.
Step 6: Repeat Steps 2–5 until the number of ants is m . The feasible routes are obtained.
The pseudo-code for solution construction is as follows (Algorithm 1):
Algorithm 1: Solution Construction
Input:
 V = {0, 1, …, n}: set of workstations; d_ij: the distance between stations;
 Q: the maximum load capacity; m: the number of ants;

K = 0;
While K < m
 Set of undistributed workstations: S = {1, 2, …, n}; depart from the raw material library: i ← 0;
 route = ∅;
 While S ≠ ∅
  Calculate the workstations that satisfy the constraints to obtain the set of feasible workstations, named A.
  if A ≠ ∅
   Calculate the node transfer probability according to the transfer probability formula and select the next node j from A via roulette.
   S = S − {j}; A = A − {j}; i ← j; update A; append i to route(K+1)
  else
   Return to the raw material library: j = 0; update A
  end
 end
 K = K + 1;
end
Output:
 route: the set of feasible routes;
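A compact Python sketch of this construction loop is given below. It is a minimal illustration: the feasibility test stands in for the capacity and cart constraints of Section 3.2, the `prob` function stands in for the transfer probability of Equation (50), and all names are assumptions rather than the paper's code.

```python
import random

def construct_routes(n_stations, m_ants, demand, cart_cap, agv_cap, n_carts, prob):
    """Build m_ants feasible route sets by roulette-wheel node selection.

    prob(i, j): unnormalized transfer probability from node i to node j;
    demand[k]: number of material boxes q_k (index 0 is the depot).
    """
    all_routes = []
    for _ in range(m_ants):
        unserved = set(range(1, n_stations + 1))
        route, i, load, carts = [0], 0, 0, 0
        while unserved:
            # stations that still fit on this AGV (cart capacity, load, cart count)
            feasible = [j for j in unserved
                        if demand[j] <= cart_cap
                        and load + demand[j] <= agv_cap
                        and carts + 1 <= n_carts]
            if not feasible:
                route.append(0)                  # return to the depot, start a new trip
                i, load, carts = 0, 0, 0
                continue
            weights = [prob(i, j) for j in feasible]
            j = random.choices(feasible, weights=weights, k=1)[0]  # roulette wheel
            route.append(j)
            unserved.remove(j)
            load += demand[j]
            carts += 1
            i = j
        route.append(0)                          # final return to the raw material library
        all_routes.append(route)
    return all_routes
```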

4.3. Nondominated Sorting Based on Reference Points

The NSACOWDRL uses nondominated sorting based on reference points to obtain Pareto solutions with good diversity. A nondominated routing is one that is not worse than any other routing on all objectives and is strictly better on at least one. Searching for nondominated routings expands the search range in the solution space, which increases the probability of selecting better routings and improves the diversity of the algorithm.
In the NSACOWDRL, when half of the individuals are selected in each generation, the paths obtained from the current and previous iterations are pooled and selected in a nondominated manner. This yields a more elite generation: it is elitist toward high-quality individuals while sufficiently biasing selection in favor of individuals with better nondominated ranks.

4.3.1. Nondominated Sorting

After one iteration of the NSACOWDRL, a set of routings is obtained. To improve the diversity of routings, a set of feasible routings named Q_ur is formed by mixing the routings of the current generation with those of the previous generation. Then, Q_ur is stratified by nondominated sorting via the following steps:
First, n_i denotes the number of individuals that dominate individual i. All individuals in Q_ur with n_i = 0 are found by pairwise comparison; they are assigned nondominance rank 1 and stored in the set R_1.
Then, the individuals dominated by the individuals in R_1 are found and stored in the set S_1. For each individual j in S_1, n_j is decreased by 1; if n_j becomes 0, individual j is assigned nondominance rank 2 and stored in the set R_2.
The above operation is repeated for the remaining individuals in Q_ur until all individuals have been assigned a nondominance rank representing the importance of the routing.
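The stratification described above corresponds to standard fast nondominated sorting. A minimal Python sketch is shown below (objective vectors are minimized; all names are illustrative assumptions):

```python
def nondominated_sort(objs):
    """Return a list of fronts (lists of indices) for minimization objectives."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # S_i: individuals dominated by i
    n_dom = [0] * n                         # n_i: how many individuals dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif i != j and dominates(objs[j], objs[i]):
                n_dom[i] += 1
    fronts = [[i for i in range(n) if n_dom[i] == 0]]   # rank 1, the set R_1
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                n_dom[j] -= 1
                if n_dom[j] == 0:           # no remaining dominators: next rank
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                      # drop the trailing empty front
```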

4.3.2. Diversity Conservation

Because Q_ur contains a large number of solutions, there are many ranks, so it includes individuals with poor nondominated ranks that are of little importance. These individuals not only reduce the running efficiency of the algorithm but also cause the pheromone changes of poor individuals to be recorded, which increases the search difficulty of the algorithm and can even lead to a local optimum.
To eliminate this difficulty, a routing selection strategy based on reference points is adopted. The solutions in Q u r are selected according to their impact on the diversity of the algorithm. The uniformly distributed reference points are generated through the hyperplane. The individual is selected based on the association between the solutions and reference points. This strategy uses well-distributed reference points to maintain the diversity of populations. The steps are described below:
(1) The individuals, starting from nondominance rank 1, are added to set D sequentially until the size of D exceeds N_max after adding the individuals with nondominance rank L. N_max is the maximum number of nondominated solutions. The individuals with nondominated ranks from 1 to (L−1) are added to set G, and the individuals with nondominated rank L are added to set G*.
(2) Establish reference points on the hyperplane:
The hyperplane has the same tilt toward all coordinate axes and an intercept of one on each axis. For a problem with M objectives, reference points are established on an (M−1)-dimensional hyperplane. For the three-objective problem, each coordinate axis is uniformly divided into H segments, and the plane perpendicular to the coordinate axis is established through each segmentation point. A reference point is the intersection of planes from different coordinate axes with the hyperplane. The number of reference points is determined by Equation (31). Figure 4 shows the hyperplane of three objectives and the reference points on it when H is set to 5 (a detailed discussion of the reference points is given in Section 5.4).
$P = \binom{M + H - 1}{H}$
(3) Normalize the individuals:
First, calculate the ideal point $z^{min}$, which consists of the minimum value over all individuals in set D on each objective. The ideal point is subtracted from all individuals in set D.
Then, calculate the extreme points, which are points with a large value in one objective and small values in the others; M objectives give M extreme points. To find the extreme point of the i-th objective, the i-th objective value of every individual is divided by a weight of 1, and the other objective values are divided by a weight of $10^{-6}$. Then, excluding the i-th objective value, the largest scaled value of each individual is found. Comparing these values across individuals, the individual with the smallest such value is the extreme point of the i-th objective.
The (M−1)-dimensional hyperplane is formed from the M extreme points, and the intercepts between the hyperplane and the coordinate axes are calculated. Figure 5 shows the two-dimensional hyperplane formed by three extreme points. $z_i^{max}$ represents the extreme point of the i-th objective, and $a_i$ represents the intercept on the i-th coordinate axis.
Finally, the normalized objective value is calculated by Equation (32):
$f_i^{n}(x) = \dfrac{f_i(x) - z_i^{min}}{a_i - z_i^{min}}, \quad i = 1, 2, \ldots, M$
where $f_i(x)$ represents the value of the individual on the i-th objective and $z_i^{min}$ represents the value of the ideal point on the i-th objective.
(4) Link individuals to reference points:
Each reference point is connected to the origin of the coordinates to form a reference line. The perpendicular distance from each individual in set D to each reference line is calculated, and the individual is associated with the reference line closest to it, so each individual corresponds to one reference point. $P_j$ denotes the number of individuals in set G associated with reference point j.
(5) Select individuals:
The individual selection operation is executed in increasing order of $P_j$. Start from the reference point with the smallest $P_j$; if more than one reference point has the smallest $P_j$, choose one of them randomly. If $P_j = 0$ and there are individuals in G* associated with the selected reference point, the individual with the smallest distance from the reference line is selected and added to G; if there is no individual associated with it in G*, this reference point is no longer considered in the remaining operations. If $P_j \ge 1$ and there are individuals associated with this reference point in G*, an associated individual is randomly selected to join G. The above operation is repeated until the size of G equals $N_{max}$.
This method implements a niche preservation operation: it maintains the diversity of the algorithm by selecting solutions in the regions where the distribution of the previous Pareto front is sparser. Because the reference points are generated on a hyperplane, the selection operation can easily be generalized to many-objective problems and alleviates the difficulty of maintaining the diversity of solution sets in high-dimensional objective spaces.
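A minimal sketch of this reference-point machinery is given below: uniform points on the unit hyperplane (the count matches Equation (31)) and the perpendicular-distance association of step (4). It is an illustrative reimplementation under these assumptions, not the paper's code.

```python
import numpy as np
from itertools import combinations

def reference_points(M, H):
    """Uniformly spaced points on the (M-1)-simplex; their count equals C(H+M-1, H)."""
    pts = []
    for cut in combinations(range(H + M - 1), M - 1):   # stars-and-bars enumeration
        prev, coords = -1, []
        for c in cut:
            coords.append(c - prev - 1)
            prev = c
        coords.append(H + M - 2 - prev)
        pts.append(np.array(coords) / H)                 # coordinates sum to 1
    return np.array(pts)

def associate(norm_objs, ref_pts):
    """For each normalized individual, index of and distance to its closest reference line."""
    assoc, dists = [], []
    for f in norm_objs:
        # perpendicular distance from f to the line through the origin and each reference point
        d = [np.linalg.norm(f - (f @ r) / (r @ r) * r) for r in ref_pts]
        assoc.append(int(np.argmin(d)))
        dists.append(min(d))
    return assoc, dists

# Example: M = 3 objectives and H = 5 divisions give C(7, 5) = 21 reference points,
# matching the configuration shown in Figure 4.
```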

4.4. Pheromone Update Strategy

The pheromone update strategy guides the algorithm update through a positive feedback mechanism; however, if the positive feedback mechanism is too strong, local convergence problems can easily occur, and if the mechanism is too weak, the algorithm will converge more slowly. To improve the convergence and diversity of the algorithms, this paper proposes a pheromone update strategy that combines the nondominated ranks with the elite routing strategy and the max-min pheromone update strategy. In this strategy, the pheromones of the routes are updated according to their nondominated rank, which comprehensively considers the influence of multiple objectives. Then, the elite paths with better nondominated ranks are updated twice to improve the convergence of the algorithm. Finally, to avoid the algorithm falling into a local optimum due to an excessively large difference between pheromone concentrations, the max-min pheromone strategy is adopted to limit the range of pheromone concentrations.
The update strategy is as follows:
(1) The pheromone update process is shown in Equations (33)–(35):
$\tau_{ij}(t+1) = (1 - \rho)\, \tau_{ij}(t) + \Delta\tau_{ij}(t, t+1) + \Delta\tau_{ij}^{*}(t, t+1)$
$\Delta\tau_{ij}(t, t+1) = \sum_{k=1}^{m} \Delta\tau_{ij}^{k}(t, t+1)$
$\Delta\tau_{ij}^{*}(t, t+1) = \sum_{k=1}^{m} \Delta\tau_{ij}^{*k}(t, t+1)$
where $\rho$ ($0 < \rho < 1$) represents the pheromone volatilization rate. $\Delta\tau_{ij}(t, t+1)$ represents the pheromone increment of path (i, j), i.e., the sum of the pheromone increments left by all ants that traverse path (i, j) in this cycle. $\Delta\tau_{ij}^{k}$ represents the amount of pheromone released by ant k on edge (i, j). $\Delta\tau_{ij}^{*}(t, t+1)$ represents the pheromone increment of the elite path (i, j), and $\Delta\tau_{ij}^{*k}$ represents the amount of pheromone released by elite ant k on edge (i, j).
(2) Common ant pheromone update strategy: $\Delta\tau_{ij}^{k}$ is updated for all routings according to their nondominance rank.
$\Delta\tau_{ij}^{k}(t, t+1) = \begin{cases} C_P\, C_N / n, & n = 1, 2, 3, 4 \\ 0, & \text{otherwise} \end{cases}$
where k denotes the k-th ant, $C_P$ represents the pheromone constant, $C_N$ represents the nondominance constant, and n represents the nondominated rank of the routing in this iteration.
(3) Elite routing pheromone update strategy: $\Delta\tau_{ij}^{*k}$ updates the pheromone concentrations of elite routings with a nondominance rank of 1 or 2.
$\Delta\tau_{ij}^{*k}(t, t+1) = \begin{cases} (6 - n)\, C_P\, C_N / n, & n = 1, 2 \\ 0, & \text{otherwise} \end{cases}$
(4) Max-min pheromone limits: Since there are various nondominance ranks, a large difference in pheromone concentration tends to occur between paths after the pheromone is updated. A too large difference in pheromone concentration can easily lead to premature stalling of the search, increasing susceptibility of the algorithm to falling into local optima. To avoid this, explicit limits are imposed on the minimum and maximum pheromone concentrations in the pheromone update strategy, as shown in Equation (38). After each iteration, it must be ensured that the pheromone concentration adheres to the limits.
$\tau_{ij}(t+1) = \begin{cases} \dfrac{\log_a \tau_{ij}(t+1)}{2\sigma}, & \tau_{ij} \ge \tau_{min} \\ \tau_{min}, & \text{otherwise} \end{cases}$
$\tau_{min} = \dfrac{1}{2\, cusum\, (1 - \rho)}$
where a represents the base of the logarithm, which is used to control the range of pheromone concentrations; $\sigma$ represents the limiting factor; and $cusum$ represents the number of nodes.
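The combined update of Equations (33)–(39) can be sketched as follows. This is a simplified illustration: the exact trigger and form of the logarithmic compression in Equation (38) were reconstructed from garbled text, so the clipping shown here is an assumption, and all parameter names are illustrative.

```python
import math

def update_pheromone(tau, routes_with_rank, rho=0.1, C_P=1.0, C_N=1.0,
                     n_nodes=30, log_base=10.0, sigma=1.0):
    """tau: dict {(i, j): concentration}; routes_with_rank: list of (route, rank) pairs."""
    # evaporation, Eq. (33)
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    # deposits by common ants (ranks 1-4) and elite ants (ranks 1-2), Eqs. (36)-(37)
    for route, rank in routes_with_rank:
        if rank > 4:
            continue
        deposit = C_P * C_N / rank
        if rank <= 2:
            deposit += (6 - rank) * C_P * C_N / rank
        for i, j in zip(route[:-1], route[1:]):
            tau[(i, j)] = tau.get((i, j), 0.0) + deposit
    # max-min limits, Eqs. (38)-(39): compress large values, floor small ones
    tau_min = 1.0 / (2.0 * n_nodes * (1.0 - rho))
    for edge, value in tau.items():
        if value > 1.0:                                        # assumed compression trigger
            value = math.log(value, log_base) / (2.0 * sigma)  # assumed reading of Eq. (38)
        tau[edge] = max(value, tau_min)                        # lower limit, Eq. (39)
    return tau
```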

4.5. Local Search Strategy

The DDQN [56] is an off-policy DRL algorithm which is improved from the DQN. The DDQN decouples the two steps of selection of the target Q value action and computation of the target Q value to address the problem that the DQN potentially leads to maximization bias, which results in overestimation.
The decoupling process is as follows:
The DDQN constructs two networks: the evaluation network and the target network. The structure of the evaluation network and the target network are identical, but the weighting parameters are different.
First, the DDQN selects the action corresponding to the maximum Q value by the evaluation network, as shown in Equation (40):
$a_{max} = \arg\max_a Q(s_{t+1}, a;\ \theta_t)$
Then, the target network calculates the target Q value as shown in Equation (41):
$y_t = r_t + \gamma\, Q(s_{t+1}, a_{max};\ \theta_t')$
Combining the two cases of the next state, the target Q value is shown in Equation (42):
$y_t = \begin{cases} r_t, & \text{if } s' \text{ is terminal} \\ r_t + \gamma\, Q\!\left( s_{t+1}, \arg\max_a Q(s_{t+1}, a;\ \theta_t);\ \theta_t' \right), & \text{otherwise} \end{cases}$
As shown in Equation (43), the DDQN adopts the mean square error (MSE) between the estimated Q value and the target Q value as the loss function and updates the evaluation network by gradient descent backpropagation.
$L(\theta_t) = \mathbb{E}\left[ \left( y_t - Q(s_t, a;\ \theta_t) \right)^2 \right]$
The DDQN learns past experience offline by building a replay buffer. After N rounds of the replay buffer sampling, the weight parameters of the evaluation network are copied to the target network so as to realize the model self-learning.
To solve the MOPs, the DDQN adopts the Pareto solution set to train the network and guide the learning direction (details are in Section 4.5.1 and Section 4.5.2). The pseudo-code is shown in the table below (Algorithm 2):
Algorithm 2: Double DQN in MOPs
Input:
 D: empty replay buffer; θ: network parameters; θ′: copy of θ;
 N_r: replay buffer maximum size; N_b: training batch size; N⁻: target network replacement frequency;
 G: the set of Pareto solutions that have been obtained; G^H: the Pareto frontier in G

For i from 1 to size(G^H)
 for j from 1 to size(G_i^H)
  Take (s_j, a_j) and r_j. Do a gradient descent step with loss [r_j(s_j, a_j) − Q(s_j, a_j; θ)]² to train θ.
 end
end
For episode e ∈ {1, 2, …, M} do
 Initialize frame sequence x ← ()
 for t ∈ {0, 1, …} do
  Set state s ← x, sample action a ~ π_B
  Sample next frame x′ from environment ε given (s, a) and receive reward r, and append x′ to x
  Compare ε to G to obtain the overall learning rate Alpha_target, set r ← r · Alpha_target
  if |x| > N_f then delete the oldest frame x_tmin from x end
  Set s′ ← x, and add (s, a, r, s′) to D, replacing the oldest tuple if |D| ≥ N_r
  Sample a minibatch of N_b tuples (s, a, r, s′) ~ Unif(D)
  Construct target values, one for each of the N_b tuples:
   a_max(s′; θ) = argmax_a Q(s′, a; θ)
   y_j = r if s′ is terminal, otherwise y_j = r + γ Q(s′, a_max(s′; θ); θ′)
  Do a gradient descent step with loss [y_j − Q(s, a; θ)]²
  Replace θ′ ← θ every N⁻ steps
 end
end
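The decoupled target computation of Equations (40)–(43) can be written compactly. The sketch below uses plain NumPy arrays standing in for the outputs of the evaluation and target networks; it is a hedged illustration, not the paper's implementation.

```python
import numpy as np

def ddqn_targets(q_eval_next, q_target_next, rewards, gamma=0.9, terminal=None):
    """Compute Double-DQN targets for a batch.

    q_eval_next[b, a]:   Q(s', a; theta)  from the evaluation network
    q_target_next[b, a]: Q(s', a; theta') from the target network
    rewards[b]: immediate rewards; terminal[b]: True if s' is terminal.
    """
    a_max = np.argmax(q_eval_next, axis=1)                   # Eq. (40): select action with theta
    q_next = q_target_next[np.arange(len(rewards)), a_max]   # Eq. (41): evaluate it with theta'
    targets = rewards + gamma * q_next                        # Eq. (42), non-terminal case
    if terminal is not None:
        targets = np.where(terminal, rewards, targets)        # terminal states keep only r
    return targets

def mse_loss(q_pred, targets):
    """Eq. (43): mean square error between estimated and target Q values."""
    return float(np.mean((targets - q_pred) ** 2))
```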

4.5.1. Environment Setup

The evolutionary algorithm guides the reinforcement learning for searching. The following is the process of establishing the DDQN environment and directing its learning direction:
(1) Neural Network Construction
The input to the problem in this paper is a three-dimensional vector which has low dimensionality, so this paper constructs a function-fitting neural network which is structured as a fully connected network with three hidden layers. The input layer has three nodes. Each hidden layer has 40 nodes.
The activation functions are chosen to be linear transfer function (purelin), saturated linear transfer function (satlin), and positive linear transfer function (poslin). Multilayer networks use different activation functions for each layer to allow the neural network to approximate arbitrary functions.
The network chooses a multilayer neural network training function: the variable learning rate backpropagation algorithm (traingdx). It uses momentum optimization to accelerate convergence, and improves stability and accuracy through an adaptive learning rate to make the gradient descent smoother.
The network structure is shown in Figure 6.
After the initial construction of the neural network is completed, initial parameter training of the network is required. To improve the effectiveness of parameter training and the speed of network approximation, the Pareto frontier is chosen as the input for initial training. The Pareto frontier, named H_pareto, is selected from the solution set G obtained by nondominated sorting; meanwhile, the set with the largest (worst) nondominated rank, named L_pareto, is also selected. The mean square error between the estimated Q values and the corresponding rewards of H_pareto is then used as the loss function to train the initial evaluation network, as shown in Equation (44).
$L(\theta_t) = \mathbb{E}_{(s, a, r, s') \sim U(H_{pareto})}\left[ \left( r(s, a) - Q(s, a;\ \theta_t) \right)^2 \right]$
$r(s, a)$ represents the reward corresponding to H_pareto in the (s, a) environment, and $Q(s, a;\ \theta_t)$ represents the estimated Q value obtained by the evaluation network in the (s, a) environment.
(2) Reward Return Design
This paper studies the multiobjective VRP problem where the reward value is related to the pheromone between the routes, the heuristic function, and the advance time. The rewards are shown in Equation (45):
$r_t = \tau_{ij}^{\alpha}(t) \times \eta_{ij}^{\beta}(t) \times \left( 1 / w_j(t_j) \right)^{\delta}$
This equation simultaneously takes full account of multiobjectives, so that when an agent completes an action choice that causes a change in the state, its corresponding reward also changes. This design allows for the effective mapping of the multiple objectives of the study to the rewards of deep reinforcement learning.
For the VRP, locally optimal subpaths can easily cause the algorithm to fall into a local optimum, so this paper proposes an overall learning rate called Alpha_target. After action selection is finished and a complete path serving all workstation requirements is obtained, the IGD [57] metrics of this path with respect to the two path sets H_pareto and L_pareto are calculated to obtain IGD_Lpareto and IGD_Hpareto. Then Alpha_target is calculated according to Equation (46), and r_t is updated according to Equation (47). Alpha_target quantifies the quality of the obtained path as the ratio of its distances from L_pareto and H_pareto, in order to correct the rewards of the subpaths of this path, so that paths with smaller IGDs gain a larger advantage, whereas paths with larger IGDs incur a larger disadvantage.
$Alpha_{target} = IGD_{L_{pareto}} / IGD_{H_{pareto}}$
$r_t = Alpha_{target} \cdot r_t$
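A small sketch of this reward shaping (Equations (45)–(47)) is given below. The IGD is shown in a simplified point-to-set form (the path contributes a single objective vector), and all names are assumptions.

```python
import numpy as np

def igd(reference_set, point):
    """Simplified IGD of one objective vector against a reference set of vectors."""
    ref = np.asarray(reference_set, dtype=float)
    return float(np.mean(np.linalg.norm(ref - np.asarray(point, dtype=float), axis=1)))

def step_reward(tau_ij, eta_ij, w_j, alpha=1.0, beta=2.0, delta=1.0):
    """Per-step reward of Eq. (45): pheromone, heuristic, and advance-time terms."""
    return (tau_ij ** alpha) * (eta_ij ** beta) * ((1.0 / max(w_j, 1e-9)) ** delta)

def shaped_rewards(step_rewards, path_objectives, H_pareto, L_pareto):
    """Scale the per-step rewards of one complete path by Alpha_target, Eqs. (46)-(47)."""
    alpha_target = igd(L_pareto, path_objectives) / igd(H_pareto, path_objectives)
    return [alpha_target * r for r in step_rewards], alpha_target
```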
The structure of the DDQN algorithm is shown in Figure 7.

4.5.2. Output Designs

The reinforcement learning results are used to improve the optimization performance of the evolutionary algorithm.
The training of the DDQN relies on the replay buffer, and the solution sets obtained in adjacent iterations of the algorithm are similar to a certain degree, so the replay buffers of adjacent iterations hardly differ, which makes it difficult for the network to perform effective parameter optimization. To improve the effectiveness of network training, the network is therefore trained only at specific iterations. For an evolutionary algorithm, the later the iteration, the better the results and the more iterations are needed to improve them further, so the network training is scheduled at generations 50, 100, and 200.
The trained network obtains 40 complete paths by action selection, which are recorded in the set G to participate in nondominated sorting in the next iteration. The pseudo-code is as follows (Algorithm 3):
Algorithm 3: Route Generation by DDQN
Input:
Q(s, a; θ): the trained network; epsilon: parameter of the greedy policy; N: number of generated routes (initialized to 0);
V: set of workstations; c_unsum: number of workstations; s_0: the outset
While N < 40 do
  Initialize parameters: s ← s_0; action ← ∅;
  While size(action) < c_unsum do
    Record the feasible actions among the unassigned workstations in V_a
    if V_a is not empty
      if rand < epsilon
        a = argmax_{a ∈ V_a} Q(s, a; θ). Set s ← a, append a to action
      else
        Take a random integer i from 1 to size(V_a), select the ith action a_i in V_a, a ← a_i;
        Set s ← a, append a to action
      end
    else
      Set s ← s_0
    end
  end
  add action to G; N ← N + 1
end
Output:
G: the set of routes
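The following Python sketch mirrors the route-construction loop of Algorithm 3; the callable q_value(state, action) stands in for the trained network and feasible_actions(state, unassigned) for the feasibility filter, both of which are assumptions introduced only for illustration.

import random

def generate_routes(q_value, workstations, feasible_actions, depot=0,
                    epsilon=0.9, n_routes=40):
    # Epsilon-greedy construction of complete routes with a trained
    # state-action value function, as in Algorithm 3.
    routes = []
    for _ in range(n_routes):
        state, action_seq = depot, []
        unassigned = set(workstations)
        while unassigned:
            candidates = feasible_actions(state, unassigned)
            if not candidates:
                state = depot          # return to the outset and continue
                continue
            if random.random() < epsilon:
                a = max(candidates, key=lambda j: q_value(state, j))  # greedy choice
            else:
                a = random.choice(list(candidates))                   # exploration
            action_seq.append(a)
            unassigned.discard(a)
            state = a
        routes.append(action_seq)
    return routes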
In the probabilistic selection of nodes, the current workstation i and the next workstation j are used as inputs to the trained network to obtain the Q value between the workstations, which participates in the probabilistic selection as shown in Equation (48).
$Q_{ij} = Q(s_i, j; \theta_t), \quad j \in E$  (48)
s_i represents the state of the current node i; j represents the next node; E represents the set of feasible nodes.

4.6. Node Transfer Probability Rule

NSACOWDRL uses a roulette wheel for node selection. The selection rule used in the roulette wheel is chosen according to Equation (49).
$J = \begin{cases} Q_{ij}(t), & iter \in U \\ p_{ij}(t), & \text{otherwise} \end{cases}$  (49)
J represents the probabilistic selection method. p_ij(t) represents selection based on the probability formula, and Q_ij(t) represents selection based on the Q table. U represents the set of specific iteration numbers.
In the JIT-RMOVRPTW, the traditional time windows are eliminated, so the time window width no longer applies. The movement of ants is related to the pheromone concentration, path distance, waiting time, and the value of the Q table. As shown in Equation (50), the probability formula of the JIT-RMOVRPTW is obtained based on the mathematical model of the problem.
$p_{ij}^{k}(t) = \begin{cases} \dfrac{\tau_{ij}^{\alpha}(t) \times \eta_{ij}^{\beta}(t) \times \left(1/w_j(t)\right)^{\delta} \times Q_{ij}^{\gamma}(t)}{\sum_{s \in allowed_k} \tau_{is}^{\alpha}(t) \times \eta_{is}^{\beta}(t) \times \left(1/w_s(t)\right)^{\delta} \times Q_{is}^{\gamma}(t)}, & j \in allowed_k \\ 0, & \text{otherwise} \end{cases}$  (50)
$\gamma = \dfrac{iter}{DDQN_{iter}}, \quad DDQN_{iter} \in U, \ \dfrac{iter}{2} \le DDQN_{iter} < iter$  (51)
$\eta_{ij}(t) = \dfrac{1}{d_{ij}}$  (52)
where J represents the set of all neighboring nodes j of node i; s represents a node that ant k is allowed to select next (the set allowed_k in Equation (50)); τ_ij represents the pheromone concentration on each path; η_ij is the heuristic function, as shown in Equation (52); Q_ij is as shown in Equation (48); α is the pheromone concentration factor, β is the heuristic function factor, γ is the Q value factor, and δ is the waiting time factor.
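The probability rule of Equation (50) can be sketched as follows for a single ant, assuming the pheromone, heuristic, waiting-time, and Q values of the currently allowed nodes are available as numpy arrays (all variable names and example values are illustrative).

import numpy as np

def transfer_probabilities(tau, eta, wait, q, alpha=1, beta=2, gamma=2, delta=2):
    # Equation (50): un-normalized attractiveness of every allowed node,
    # followed by normalization over the allowed set.
    weights = (tau ** alpha) * (eta ** beta) * ((1.0 / wait) ** delta) * (q ** gamma)
    return weights / weights.sum()

def roulette_wheel(probabilities, rng=np.random.default_rng()):
    # Select one candidate index in proportion to its transfer probability.
    return rng.choice(len(probabilities), p=probabilities)

# Example with three allowed nodes
p = transfer_probabilities(tau=np.array([0.8, 0.5, 0.6]),
                           eta=np.array([0.10, 0.25, 0.20]),
                           wait=np.array([12.0, 30.0, 18.0]),
                           q=np.array([0.7, 0.4, 0.9]))
next_node = roulette_wheel(p)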

4.7. Robust Multiobjective Optimization

The traditional robust selection strategy selects only the solution set with the highest feasibility. However, due to the conflicting nature of the multiple objectives, the other objectives of the most feasible solution are rarely close to their optima. For practical problems, appropriate delays are acceptable. This paper therefore proposes a robust selection method based on uniformly distributed weights in the solution space, combined with Monte Carlo simulation. Since the solutions in the neighborhood centered at a point in the solution space have some degree of similarity, this robust optimization method splits the solution space by uniform weights and converts the feasibility of each solution into its number of Monte Carlo feasible times. Finally, the optimal set of solutions in the region of the solution space corresponding to each weight is found separately. This strategy obtains solutions that take robustness into account while guaranteeing diversity across the multiple objectives.
Through this strategy, highly feasible solutions are retained and, at the same time, thanks to the uniformly distributed weights, solutions with lower feasibility but better values on the other objectives are also retained, providing the decision maker with more options in the face of different practical needs.
First, the solution space is partitioned by uniformly distributed weights, as shown in Figure 8, with different colors representing different weights. Each weight represents a region of the solution space.
Then, each solution is associated with the weight closest to it. Following this, Monte Carlo simulation is used to simulate each solution 1000 times to obtain the Monte Carlo feasible times of each solution, denoted MC; the pseudo-code is shown in Algorithm 4. Each solution can then be represented as (f_d, f_alpha, f_JIT, MC).
Algorithm 4: Monte Carlo Simulation
Input:
NV: the number of AGVs; route = {route_1, route_2, ..., route_NV}: a set of routes
route_i = {node(1), node(2), ..., node(m)}: a route; t = {t_{1,2}, t_{1,3}, ..., t_{n,n−1}}: travel time
s = {s_1, s_2, ..., s_n}: service time; ũ_ij ~ ζ̄: uncertain time; maxsim: the number of Monte Carlo simulations
MC: empty set of Monte Carlo feasible times
for i = 1 : length(route)
  for times = 1 : maxsim
    for j = 1 : length(route(i))
      if j = 1
        t̃_route(i,j) = t_{0,route(i,j)} + ũ_ij, with ũ_ij ~ ζ̄;
      else
        t̃_route(i,j) = t̃_route(i,j−1) + s_route(i,j−1) + t_{route(i,j−1),route(i,j)} + ũ_ij, with ũ_ij ~ ζ̄;
      end
    end
    if route(i) satisfies the constraints
      MC(i) = MC(i) + 1;
    end
  end
end
Output:
MC: Monte Carlo feasible times for each solution
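A minimal Python sketch of the simulation for a single route is given below; it assumes the uncertain time is drawn from a triangular distribution (consistent with the triangularly distributed interference coefficients used in Section 5) and that a user-supplied is_feasible check encodes the model constraints. The structure and names are illustrative only.

import numpy as np

def monte_carlo_feasible_times(route, travel_time, service_time, is_feasible,
                               disturbance=(0.0, 5.0, 10.0), max_sim=1000,
                               rng=np.random.default_rng()):
    # Count how many of max_sim simulated runs of one route remain feasible when
    # every travel leg is perturbed by a triangularly distributed uncertain time.
    low, mode, high = disturbance
    feasible = 0
    for _ in range(max_sim):
        arrival, clock, prev = [], 0.0, 0  # node 0 is the depot
        for node in route:
            clock += travel_time[prev][node] + rng.triangular(low, mode, high)
            arrival.append(clock)
            clock += service_time[node]
            prev = node
        if is_feasible(route, arrival):
            feasible += 1
    return feasible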
For each weight, if at least one solution is associated with it, the solution with the highest MC is sought; the Monte Carlo feasible times of this solution are denoted MC_e, and the solutions under the same weight whose MC is less than MC_e are deleted. The remaining solutions form the optimal set of solutions in the region of the solution space represented by that weight. The final result is shown in Figure 9, which is the set of robust Pareto solutions that preserves the diversity of the algorithm. The points in different colors represent the solutions corresponding to weights of the same color.
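A compact sketch of this diversity-robust selection is shown below, assuming the objective vectors have been normalized and the weights are points in the same space; the association uses Euclidean distance, and all names are illustrative.

import numpy as np

def robust_diverse_selection(objectives, mc_counts, weights):
    # Associate each solution with its nearest weight, then within each weight
    # keep only the solutions whose Monte Carlo feasible times equal the best
    # value MC_e found for that weight; less feasible solutions are deleted.
    objectives = np.asarray(objectives, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mc_counts = np.asarray(mc_counts)
    assignment = np.argmin(
        np.linalg.norm(objectives[:, None, :] - weights[None, :, :], axis=2), axis=1)
    selected = []
    for w in range(len(weights)):
        members = np.where(assignment == w)[0]
        if members.size == 0:
            continue                      # no solution associated with this weight
        best = mc_counts[members].max()   # MC_e for this weight
        selected.extend(members[mc_counts[members] == best].tolist())
    return selected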

5. Experiments and Analyses

The comparison experiments carried out to test the performance of NSACOWDRL are discussed in this section. We compare the performance of multiple multiobjective algorithms on different instances, NSACOWDRL under different JIT models, and NSACOWDRL under different network settings.

5.1. Experimental Setting

To test the performance of NSACOWDRL, four instances of Solomon's VRPTW benchmark problems, named C202, C206, RC202, and RC204, and one instance named RL, built from a real case at a manufacturer in China, are adopted; each instance is tested with three different interference coefficients.
Based on the model and the experience of the manufacturer, the following modifications were made to the instances:
(1) The company divides the distribution time of one day into several cycle times. Different distribution tasks are performed in each cycle.
(2) All the workstations are available all day.
(3) The assembly workshop has standardized the delivery of materials. All the materials are delivered in standardized boxes, and all the materials required by the same station during a cycle are placed in the same cart.
(4) In this case, due to the constraints of the assembly workshop and the safety of the AGV operation, the maximum number of carts is nine.
Table 1 shows the dataset of RL. Figure 10 shows a simple illustrative scenario to visualize the layout and process of material delivery. The dotted lines indicate the routes on which the AGVs can run. The small rectangles joined together represent workstations. The black dots indicate the stations that need to be distributed.
The NSACOWDRL was executed for 250 generations. The number of ants was set to 40. α, β, γ, and δ in the node transition probability formula were set to 1, 2, 2, and 2, respectively. The parameter ρ in the pheromone update formula was set to 0.2. The parameter H was set to 10. In the DDQN, the buffer size was set to 60,000, epsilon was set to 0.9, gamma was set to 0.9, and the target update interval was set to 1200. The parameters selected were not necessarily optimal for each model; this paper did not strictly tune them.

5.2. Performance Metrics

In MOPs, the diversity and convergence of the algorithm are the main performance metrics that need to be evaluated. A single metric has difficulty comprehensively measuring the performance of the algorithm. In this paper, two performance metrics, HV [1] and IGD, are adopted to evaluate the performance of the algorithm. These metrics are described below:
As shown in Equation (53), the HV metric is the hypervolume metric, which measures the diversity of an algorithm by calculating the volume of the objective-space region enclosed by the set of nondominated solutions and the reference point. The larger the HV is, the better the diversity of the algorithm.
$HV = Leb\left(\bigcup_{i=1}^{|S|} v_i\right)$  (53)
Leb represents the Lebesgue measure, which is used to measure the volume. |S| represents the number of solutions in the nondominated set, and v_i represents the hypervolume formed by the reference point and the ith solution in the solution set.
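For the two-objective minimization case, the HV of Equation (53) can be computed with the short sketch below; the reference point and front are illustrative, and the paper's experiments use three normalized objectives, for which dedicated HV routines are typically employed.

import numpy as np

def hypervolume_2d(front, reference):
    # Equation (53) for two minimized objectives: sort the mutually
    # nondominated points by the first objective and accumulate the rectangles
    # they dominate up to the reference point.
    pts = sorted(map(tuple, np.asarray(front, dtype=float)))
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in pts:
        hv += (reference[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Example: two nondominated points with reference point (1.1, 1.1)
hv = hypervolume_2d([[0.2, 0.8], [0.6, 0.3]], (1.1, 1.1))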
As shown in Equation (54), the IGD metric is an inverse generational distance metric: a comprehensive performance metric that evaluates the convergence and distribution performance of the algorithm by calculating the average minimum distance from each point on the true Pareto front (PF) to the set of individuals obtained by the algorithm. The smaller the IGD is, the better the convergence and distribution performance of the algorithm.
$IGD(P, P^*) = \dfrac{\sum_{x \in P^*} \min_{y \in P} dis(x, y)}{|P^*|}$  (54)
P is the solution set obtained by the algorithm, P* is a set of uniformly distributed reference points sampled from the Pareto front, and dis(x, y) represents the Euclidean distance between point x in the reference set P* and point y in the solution set P.
Because it is difficult to find the true Pareto front in real scenarios, the PF is approximated by considering all nondominated solutions obtained by all algorithms.
In this experiment, the normalized metrics were used to unify the measures between the different objectives.

5.3. Experimental Results

Based on the datasets, experiments were carried out on the JIT-RMOVRPTW models to verify the effectiveness of the NSACOWDRL. To demonstrate its superiority on multiobjective problems, NSACOWDRL is compared with NSACO, NSGA-III, NSGA-II, and MOEA/D. Twenty independent runs were used to test the performance of the proposed algorithm in each experiment. The results are reported in terms of HV and IGD.

5.3.1. Results of the JIT-RMOVRPTW

(1) Analysis of the Performance Metric
Table 2 and Table 3 present the mean and variance of the HV and IGD metrics, respectively, for the different algorithms on the JIT-RMOVRPTW datasets with three interference coefficients. The leftmost column gives the instance names. The second column gives the three interference coefficients, which are triangularly distributed; "Small", "Middle", and "Large" represent interference coefficients from small to large. For the two rows that follow each interference coefficient, the top row gives the mean of the metric obtained over 20 independent runs of the corresponding algorithm with that interference coefficient, and the bottom row gives the variance.
As shown in Table 2 and Table 3, NSACOWDRL performs better, with superior metrics across the instances, and its metrics are more stable. In comparison, on RL, RC204, and RC202, as the interference coefficient increases, the performance of NSGA-II improves while the performance of MOEA/D decreases. However, on C202 and C206, as the interference coefficient increases, the performance of NSGA-II decreases and the performance of MOEA/D improves. Meanwhile, the performance of NSGA-III is more stable. The performance of the algorithms is strongly influenced by the instances: NSACO, NSGA-II, NSGA-III, and MOEA/D each have advantages on different instances, whereas NSACOWDRL is always superior.
To determine the differences between the algorithms on the different metrics, pairwise p-value tests were carried out. As shown in Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15, the p values are represented by heatmaps. The X-axis is the interference coefficient, and the p values at different interference coefficients are shown on either side of the breakpoints in the X-axis. The upper triangle of each large square is the p value for HV and the lower triangle is the p value for IGD; p values greater than 0.05 are marked in black.
The p-value plots show that the algorithms differ significantly between instances and between interference coefficients. NSACOWDRL is significantly better than the other algorithms, especially on C202, C206, and RL, whereas on RC202 and RC204 the performance of MOEA/D is close to that of NSACOWDRL in terms of the IGD metric. Meanwhile, as the interference coefficient increases, the variation between algorithm performances decreases on C202, C206, and RC204, whereas on RC202 and RL the variation fluctuates. Algorithm performance is affected by the combination of instance and interference coefficient; as the interference coefficient increases, the variation between algorithms is affected more on the RC-series instances than on the C-series.
As the results show, the NSACOWDRL outperforms the other algorithms in terms of convergence and diversity when solving the JIT-RMOVRPTW. Furthermore, the NSACOWDRL generalizes better across different scenarios.

5.3.2. Discussions

The results of the algorithms on the different instances show the superiority of the NSACOWDRL on the proposed model. To evaluate the impact of different JIT strategies on the performance of the algorithm, the following three JIT strategies are compared:
Strategy 1: Consider transportation capacity and the slack time.
$JIT_i = b_i - \dfrac{b_i - a_i}{p}\left(1 - \dfrac{q_i}{Q}\right)$
Strategy 2: Consider the slack time.
$JIT_i = b_i - \dfrac{b_i - a_i}{p}$
Strategy 3: Schedule advance time for the time window.
$JIT_i = \text{advance time}$
The results are shown in Table 4. "Mean gap" represents the gap between the best mean and the worst mean over the different interference coefficients of the same strategy. The NSACOWDRL has better performance under all strategies, and the solutions obtained by the NSACOWDRL have more stable metrics. In comparison, across the models, as the interference coefficient increases, NSGA-III gradually approaches NSGA-II in terms of convergence, and finally its performance becomes weaker than that of NSGA-II. Meanwhile, the diversity of MOEA/D becomes progressively weaker than that of NSGA-III, and the convergence of MOEA/D weakens to approach that of NSGA-II. Faced with different JIT strategies, NSGA-II, NSGA-III and MOEA/D each have their own strengths and weaknesses, while NSACOWDRL always keeps its superiority.
Thus, NSGA-III and MOEA/D are susceptible to the interference coefficients and JIT strategies, whereas the NSACOWDRL has better generalizability when faced with different models.
Analyzing the "mean gap" shows that, as the interference coefficient increases, the HV and IGD metrics of NSACOWDRL degrade least under Strategy 1, followed by Strategy 2, while Strategy 3 shows the largest degradation. Compared with the other strategies, Strategy 1 maintains the performance of the algorithm better, which verifies its superiority.

5.4. Different Parameter Settings

In the previous experiments, this paper made no serious attempt to find the optimal parameter settings for NSACOWDRL. For the DDQN, the type of neural network affects the performance of the algorithm. In this section, based on the proposed model, the neural network of the DDQN is set as a back propagation (BP) network, a radial basis function (RBF) network, a general regression neural network (GRNN), and a product-based network (PNN), respectively, so as to test the effects of the different networks.
Table 5 shows the performance of the algorithm with the different networks. Figure 16 shows the p values of the metrics. By comparison, it can be seen that PNN has the best convergence and diversity, its performance is closest to that of GRNN, and RBF has the worst performance. At the same time, as the interference coefficient increases, the variability between the different networks decreases.
For illustration, Figure 17 shows the sets of Pareto solutions in the approximate PF obtained by each algorithm with different interference coefficients. The interference coefficient increases from left to right and then from top to bottom in the subplots of Figure 17. As the interference coefficient increases, each algorithm obtains more Pareto solutions which are more widely distributed in the solution space, and the variation in the distribution of solutions between algorithms decreases. The figure shows that the solutions of NSACOWDRL_PNN are distributed at all levels between (0, 1) on the 1/alpha axis in all subplots, while the solutions of NSACOWDRL_RBF are concentrated in two blocks. In terms of solution space distribution, the distribution obtained by NSACOWDRL_PNN is comparable to that of NSACOWDRL_GRNN, which is better than the other algorithms.
These results show that, when tuning the algorithm, choosing an appropriate network can effectively improve its performance. The appropriate neural network depends on the type of problem: for example, a Convolutional Neural Network (CNN) can be used for discrete action space problems, while a GRNN can be used for nonlinear problems, so different problems require different network architectures and tuning.

6. Conclusions

This paper constructs the multiobjective AGV routing planning mathematical model of the JIT-RMOVRPTW for assembly workshops, which considers the uncertainty in the transportation time. Based on the constraints, the metric alpha, which measures the feasibility of a solution under uncertainty, is introduced into the model. To solve the problem, the two-stage NSACOWDRL is proposed. In stage I, the NSACOWDRL adopts the nondominated routing selection method through small habitat protection based on reference points to obtain the set of nondominated routes, which ensures the diversity of the NSACOWDRL. The nondominated rank is introduced into the pheromone update strategy, which adopts the elite routing update method to protect better routings and uses the maximum–minimum pheromone strategy to prevent the algorithm from falling into local optima due to an overly large gap between pheromone concentrations. Then, the DDQN is introduced as a local search algorithm whose networks are trained on the nondominated solution sets. The trained network is used to generate routes that participate in the next nondominated sorting; in addition, the network participates in the probability formula. The state transfer probability formula based on the objectives, the DDQN, and the constraints is proposed in the NSACOWDRL. In stage II, the feasibility of the nondominated solutions obtained in stage I is quantified by Monte Carlo simulation as Monte Carlo feasible times, which are then robustly selected for diversity based on uniform weights in the solution space to obtain the Pareto solution set. The JIT-RMOVRPTW deals well with the conflict between punctuality and robustness in uncertain environments. Meanwhile, based on the multidimensional solution space, the multiobjective problem is also solved better. The NSACOWDRL complements evolutionary algorithms with deep reinforcement learning to enhance the optimization capability.
The experimental results among the different algorithms under different interference coefficients verify the superiority of the NSACOWDRL in terms of diversity and convergence on the robust VRP. A comparison of the models for different JIT strategies also reveals the superiority of the proposed model. The experiments demonstrate that NSACOWDRL has better generalization in different scenarios.
The uncertainty of material requirements caused by the volatility of processing times, equipment failures, and other issues will be considered further. Future research can increase the complexity of the model and further optimize the material distribution scheme. The comparison experiments on different networks show that the network structure of the DRL model has a large impact on performance; therefore, research on DRL models needs to be deepened further.

Author Contributions

Conceptualization, Y.C. and M.C.; methodology, Y.C., W.Y., F.Y. and M.C.; software, F.Y. and M.C.; validation, F.Y. and M.C.; formal analysis, H.L. and M.C.; investigation, F.Y. and H.L.; resources, Y.C.; data curation, M.C.; writing—original draft preparation, M.C.; writing—review and editing, W.Y.; visualization, F.Y., H.L. and M.C.; supervision, Y.C. and W.Y.; project administration, M.C.; funding acquisition, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Natural Science Foundation of Zhejiang province, China under the Grant Nos. LGG22G010002 and Nos. 52005447, Zhejiang Provincial Natural Science Foundation of China under Grant Nos. LQ21E050014.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The instance dataset presented in the study is included in the article. The benchmark problem datasets presented in the study are openly available in [VRPTW] at [https://www.sintef.no/projectweb/top/vrptw/solomon-benchmark/].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zitzler, E.; Thiele, L. Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271. [Google Scholar] [CrossRef]
  2. Liu, X.; Chen, Y.-L.; Por, L.Y.; Ku, C.S. A Systematic Literature Review of Vehicle Routing Problems with Time Windows. Sustainability 2023, 15, 12004. [Google Scholar] [CrossRef]
  3. Li, B.; Wu, G.; He, Y.; Fan, M.; Pedrycz, W. An Overview and Experimental Study of Learning-Based Optimization Algorithms for the Vehicle Routing Problem. IEEE/CAA J. Autom. Sin. 2022, 9, 1115–1138. [Google Scholar] [CrossRef]
  4. Asghari, M.; Mirzapour Al-e-hashem, S.M.J. Green Vehicle Routing Problem: A State-of-the-Art Review. Int. J. Prod. Econ. 2021, 231, 107899. [Google Scholar] [CrossRef]
  5. Gunawan, A.; Kendall, G.; McCollum, B.; Seow, H.-V.; Lee, L.S. Vehicle Routing: Review of Benchmark Datasets. J. Oper. Res. Soc. 2021, 72, 1794–1807. [Google Scholar] [CrossRef]
  6. Salehi Sarbijan, M.; Behnamian, J. Emerging Research Fields in Vehicle Routing Problem: A Short Review. Arch. Computat. Methods Eng. 2023, 30, 2473–2491. [Google Scholar] [CrossRef]
  7. Zhang, H.; Ge, H.; Yang, J.; Tong, Y. Review of Vehicle Routing Problems: Models, Classification and Solving Algorithms. Arch. Computat. Methods Eng. 2022, 29, 195–221. [Google Scholar] [CrossRef]
  8. Ni, Q.; Tang, Y. A Bibliometric Visualized Analysis and Classification of Vehicle Routing Problem Research. Sustainability 2023, 15, 7394. [Google Scholar] [CrossRef]
  9. Soysal, M.; Çimen, M. A Simulation Based Restricted Dynamic Programming Approach for the Green Time Dependent Vehicle Routing Problem. Comput. Oper. Res. 2017, 88, 297–305. [Google Scholar] [CrossRef]
  10. Reihaneh, M.; Ghoniem, A. A Branch-and-Price Algorithm for a Vehicle Routing with Demand Allocation Problem. Eur. J. Oper. Res. 2019, 272, 523–538. [Google Scholar] [CrossRef]
  11. Wang, Y.; Zhou, J.; Sun, Y.; Fan, J.; Wang, Z.; Wang, H. Collaborative Multidepot Electric Vehicle Routing Problem with Time Windows and Shared Charging Stations. Expert Syst. Appl. 2023, 219, 119654. [Google Scholar] [CrossRef]
  12. Pierre, D.M.; Zakaria, N. Stochastic Partially Optimized Cyclic Shift Crossover for Multi-Objective Genetic Algorithms for the Vehicle Routing Problem with Time-Windows. Appl. Soft Comput. 2017, 52, 863–876. [Google Scholar] [CrossRef]
  13. Wang, J.; Xu, Z.; He, M.; Xue, L.; Xu, H. Optimization of Pickup Vehicle Scheduling for Steel Logistics Park with Mixed Storage. Appl. Sci. 2024, 14, 3628. [Google Scholar] [CrossRef]
  14. Geng, Y.; Li, J. An Improved Hyperplane Assisted Multiobjective Optimization for Distributed Hybrid Flow Shop Scheduling Problem in Glass Manufacturing Systems. CMES 2022, 134, 241–266. [Google Scholar] [CrossRef]
  15. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Computat. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  16. Corne, D.; Jerram, N.; Knowles, J.; Oates, M. PESA-II: Region-Based Selection in Evolutionary Multiobjective Optimization. In Proceedings of the 6th International Conference on Parallel Problem Solving from Nature, PPSN VI, Paris, France, 18–20 September 2000. [Google Scholar]
  17. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  18. Liu, H.-L.; Gu, F.; Zhang, Q. Decomposition of a Multiobjective Optimization Problem Into a Number of Simple Multiobjective Subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455. [Google Scholar] [CrossRef]
  19. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A Reference Vector Guided Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef]
  20. Zitzler, E.; Künzli, S. Indicator-Based Selection in Multiobjective Search. In Proceedings of the Parallel Problem Solving from Nature—PPSN VIII, Birmingham, UK, 18–22 September 2004; Yao, X., Burke, E.K., Lozano, J.A., Smith, J., Merelo-Guervós, J.J., Bullinaria, J.A., Rowe, J.E., Tiňo, P., Kabán, A., Schwefel, H.-P., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 832–842. [Google Scholar]
  21. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective Selection Based on Dominated Hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669. [Google Scholar] [CrossRef]
  22. Li, K.; Kwong, S.; Cao, J.; Li, M.; Zheng, J.; Shen, R. Achieving Balance between Proximity and Diversity in Multi-Objective Evolutionary Algorithm. Inf. Sci. 2012, 182, 220–242. [Google Scholar] [CrossRef]
  23. Beyer, H.-G.; Sendhoff, B. Robust Optimization—A Comprehensive Survey. Comput. Methods Appl. Mech. Eng. 2007, 196, 3190–3218. [Google Scholar] [CrossRef]
  24. Xia, T.; Li, M. An Efficient Multi-Objective Robust Optimization Method by Sequentially Searching From Nominal Pareto Solutions. J. Comput. Inf. Sci. Eng. 2021, 21, 041010. [Google Scholar] [CrossRef]
  25. Jin, Y.; Branke, J. Evolutionary Optimization in Uncertain Environments-a Survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317. [Google Scholar] [CrossRef]
  26. Scheffermann, R.; Bender, M.; Cardeneo, A. Robust Solutions for Vehicle Routing Problems via Evolutionary Multiobjective Optimization. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 1605–1612. [Google Scholar]
  27. He, Z.; Yen, G.G.; Yi, Z. Robust Multiobjective Optimization via Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2019, 23, 316–330. [Google Scholar] [CrossRef]
  28. Deb, K.; Jain, H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  29. Leng, K.; Li, S. Distribution Path Optimization for Intelligent Logistics Vehicles of Urban Rail Transportation Using VRP Optimization Model. IEEE Trans. Intell. Transp. Syst. 2022, 23, 1661–1669. [Google Scholar] [CrossRef]
  30. Zhang, Q.; Wu, L.; Li, J. Application of Improved Scatter Search Algorithm to Reverse Logistics VRP Problem. In Proceedings of the 2023 IEEE 5th International Conference on Power, Intelligent Computing and Systems (ICPICS), Shenyang, China, 14–16 July 2023; pp. 283–288. [Google Scholar]
  31. Sabar, N.R.; Goh, S.L.; Turky, A.; Kendall, G. Population-Based Iterated Local Search Approach for Dynamic Vehicle Routing Problems. IEEE Trans. Autom. Sci. Eng. 2022, 19, 2933–2943. [Google Scholar] [CrossRef]
  32. Feng, L.; Huang, Y.; Tsang, I.W.; Gupta, A.; Tang, K.; Tan, K.C.; Ong, Y.-S. Towards Faster Vehicle Routing by Transferring Knowledge From Customer Representation. IEEE Trans. Intell. Transp. Syst. 2022, 23, 952–965. [Google Scholar] [CrossRef]
  33. Li, J.; Xin, L.; Cao, Z.; Lim, A.; Song, W.; Zhang, J. Heterogeneous Attentions for Solving Pickup and Delivery Problem via Deep Reinforcement Learning. IEEE Trans. Intell. Transp. Syst. 2022, 23, 2306–2315. [Google Scholar] [CrossRef]
  34. Jia, Y.-H.; Mei, Y.; Zhang, M. A Bilevel Ant Colony Optimization Algorithm for Capacitated Electric Vehicle Routing Problem. IEEE Trans. Cybern. 2022, 52, 10855–10868. [Google Scholar] [CrossRef]
  35. Elgharably, N.; Easa, S.; Nassef, A.; El Damatty, A. Stochastic Multi-Objective Vehicle Routing Model in Green Environment With Customer Satisfaction. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1337–1355. [Google Scholar] [CrossRef]
  36. Motaghedi-Larijani, A. Solving the Number of Cross-Dock Open Doors Optimization Problem by Combination of NSGA-II and Multi-Objective Simulated Annealing. Appl. Soft Comput. 2022, 128, 109448. [Google Scholar] [CrossRef]
  37. Yin, N. Multiobjective Optimization for Vehicle Routing Optimization Problem in Low-Carbon Intelligent Transportation. IEEE Trans. Intell. Transp. Syst. 2022, 24, 13161–13170. [Google Scholar] [CrossRef]
  38. Mazdarani, F.; Farid Ghannadpour, S.; Zandieh, F. Bi-Objective Overlapped Links Vehicle Routing Problem for Risk Minimizing Valuables Transportation. Comput. Oper. Res. 2023, 153, 106177. [Google Scholar] [CrossRef]
  39. Wang, J.; Weng, T.; Zhang, Q. A Two-Stage Multiobjective Evolutionary Algorithm for Multiobjective Multidepot Vehicle Routing Problem With Time Windows. IEEE Trans. Cybern. 2019, 49, 2467–2478. [Google Scholar] [CrossRef] [PubMed]
  40. Deb, S.; Tammi, K.; Gao, X.-Z.; Kalita, K.; Mahanta, P.; Cross, S. A Robust Two-Stage Planning Model for the Charging Station Placement Problem Considering Road Traffic Uncertainty. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6571–6585. [Google Scholar] [CrossRef]
  41. Muñoz, C.C.; Palacios-Alonso, J.J.; Vela, C.R.; Afsar, S. Solving a Vehicle Routing Problem with Uncertain Demands and Adaptive Credibility Thresholds. In Proceedings of the 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Padua, Italy, 18–23 July 2022; pp. 1–8. [Google Scholar]
  42. Wang, J. Research on Route Planning Model and Algorithm of Electric Distribution Vehicle Based on Robust Optimization under the Background of Carbon Trading. In Proceedings of the 2023 IEEE International Conference on Sensors, Electronics and Computer Engineering (ICSECE), Jinzhou, China, 18–20 August 2023; pp. 1623–1628. [Google Scholar]
  43. Duan, J.; He, Z.; Yen, G.G. Robust Multiobjective Optimization for Vehicle Routing Problem with Time Windows. IEEE Trans. Cybern. 2022, 52, 8300–8314. [Google Scholar] [CrossRef] [PubMed]
  44. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant System: Optimization by a Colony of Cooperating Agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed]
  45. Chen, F.; Xie, W.; Ma, J.; Chen, J.; Wang, X. Textile Flexible Job-Shop Scheduling Based on a Modified Ant Colony Optimization Algorithm. Appl. Sci. 2024, 14, 4082. [Google Scholar] [CrossRef]
  46. Kurian, A.M.; Onuorah, M.J.; Ammari, H.M. Optimizing Coverage in Wireless Sensor Networks: A Binary Ant Colony Algorithm with Hill Climbing. Appl. Sci. 2024, 14, 960. [Google Scholar] [CrossRef]
  47. Huang, Y.-H.; Blazquez, C.A.; Huang, S.-H.; Paredes-Belmar, G.; Latorre-Nuñez, G. Solving the Feeder Vehicle Routing Problem Using Ant Colony Optimization. Comput. Ind. Eng. 2019, 127, 520–535. [Google Scholar] [CrossRef]
  48. Li, Y.; Soleimani, H.; Zohal, M. An Improved Ant Colony Optimization Algorithm for the Multi-Depot Green Vehicle Routing Problem with Multiple Objectives. J. Clean. Prod. 2019, 227, 1161–1172. [Google Scholar] [CrossRef]
  49. Jiao, Z.; Ma, K.; Rong, Y.; Wang, P.; Zhang, H.; Wang, S. A Path Planning Method Using Adaptive Polymorphic Ant Colony Algorithm for Smart Wheelchairs. J. Comput. Sci. 2018, 25, 50–57. [Google Scholar] [CrossRef]
  50. Ren, T.; Luo, T.; Jia, B.; Yang, B.; Wang, L.; Xing, L. Improved Ant Colony Optimization for the Vehicle Routing Problem with Split Pickup and Split Delivery. Swarm Evol. Comput. 2023, 77, 101228. [Google Scholar] [CrossRef]
  51. Luo, Q.; Wang, H.; Zheng, Y.; He, J. Research on Path Planning of Mobile Robot Based on Improved Ant Colony Algorithm. Neural Comput. Applic 2020, 32, 1555–1566. [Google Scholar] [CrossRef]
  52. Li, J.-Y.; Deng, X.-Y.; Zhan, Z.-H.; Yu, L.; Tan, K.C.; Lai, K.-K.; Zhang, J. A Multipopulation Multiobjective Ant Colony System Considering Travel and Prevention Costs for Vehicle Routing in COVID-19-like Epidemics. IEEE Trans. Intell. Transp. Syst. 2022, 23, 25062–25076. [Google Scholar] [CrossRef]
  53. Liu, K.; Zhang, M. Path Planning Based on Simulated Annealing Ant Colony Algorithm. In Proceedings of the 2016 9th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 10–11 December 2016; Volume 2, pp. 461–466. [Google Scholar]
  54. Wang, F.; Wang, J.; Chen, X. Evacuation Entropy Path Planning Model Based on Hybrid Ant Colony-Artificial Fish Swarm Algorithms. IOP Conf. Ser. Mater. Sci. Eng. 2019, 563, 052025. [Google Scholar] [CrossRef]
  55. Frías, N.; Johnson, F.; Valle, C. Hybrid Algorithms for Energy Minimizing Vehicle Routing Problem: Integrating Clusterization and Ant Colony Optimization. IEEE Access 2023, 11, 125800–125821. [Google Scholar] [CrossRef]
  56. van Hasselt, H.; Guez, A.; Silver, D. Deep Reinforcement Learning with Double Q-Learning. arXiv 2015. [Google Scholar] [CrossRef]
  57. Coello, C.A.C.; Cortés, N.C. Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190. [Google Scholar] [CrossRef]
Figure 1. JIT time setting.
Figure 2. The optimization process of the NSACOWDRL.
Figure 3. Framework of the NSACOWDRL.
Figure 4. Hyperplane and reference points.
Figure 5. Construction of the hyperplane.
Figure 6. Construction of the network.
Figure 7. The structure of the DDQN.
Figure 8. Uniformly distributed weights in the solution space.
Figure 9. Pareto solutions.
Figure 10. Material information dataset routing.
Figure 11. p value heatmap of c202.
Figure 12. p value heatmap of c206.
Figure 13. p value heatmap of rc202.
Figure 14. p value heatmap of rc204.
Figure 15. p value heatmap of the real case.
Figure 16. p value heatmap of networks.
Figure 17. The set of Pareto solutions in PF obtained by algorithms.
Table 1. Material information dataset of the real case.
Material Number | X-Coordinate | Y-Coordinate | Number of Delivery Boxes | Left Time Window | Right Time Window | Service Time
0 | 0 | 11.2 | 0 | 0 | 0 | 0
1 | 6.9 | 30.4 | 1 | 792 | 1800 | 15
2 | 8.1 | 30.4 | 1 | 120 | 720 | 15
3 | 9.3 | 30.4 | 2 | 792 | 1800 | 30
4 | 9.3 | 30.4 | 1 | 72 | 720 | 15
5 | 12.9 | 30.4 | 1 | 240 | 1080 | 15
6 | 12.9 | 30.4 | 1 | 132 | 1080 | 15
7 | 15.3 | 30.4 | 1 | 240 | 1080 | 15
8 | 15.3 | 30.4 | 1 | 216 | 1080 | 15
9 | 20.1 | 30.4 | 1 | 288 | 1368 | 15
10 | 22.5 | 30.4 | 1 | 828 | 1800 | 15
11 | 24.9 | 30.4 | 2 | 1200 | 1800 | 30
12 | 24.9 | 30.4 | 2 | 450 | 1800 | 30
13 | 28.5 | 30.4 | 1 | 450 | 1800 | 15
14 | 28.5 | 30.4 | 1 | 540 | 1800 | 15
15 | 33.3 | 30.4 | 1 | 180 | 1080 | 15
16 | 6.9 | 25.3 | 1 | 1620 | 1800 | 15
17 | 10.5 | 25.3 | 5 | 900 | 1800 | 75
18 | 11.7 | 25.3 | 1 | 1200 | 1800 | 15
19 | 11.7 | 25.3 | 2 | 828 | 1800 | 30
20 | 21.3 | 25.3 | 3 | 115 | 720 | 45
21 | 22.5 | 25.3 | 1 | 1620 | 1800 | 15
22 | 27.3 | 25.3 | 5 | 600 | 1800 | 75
23 | 27.3 | 25.3 | 1 | 600 | 1800 | 15
24 | 28.5 | 25.3 | 1 | 540 | 1800 | 15
Table 2. Mean and variance metric values of HV indicators.
Instance | ζ̄ | NSACOWDRL | NSACO | NSGA-III | NSGA-II | MOEA/D
c202 | Small | 0.778107 | 0.659747 | 0.482335 | 0.620586 | 0.582376
9.74E−05 | 0.000594 | 0.003465 | 0.002409 | 0.002838
Middle | 0.783101 | 0.685152 | 0.522198 | 0.622289 | 0.612993
0.000137 | 0.000604 | 0.003457 | 0.004938 | 0.004171
Large | 0.708188 | 0.609913 | 0.404309 | 0.505273 | 0.508527
0.000179 | 0.001238 | 0.004473 | 0.003483 | 0.005629
c206 | Small | 0.656716 | 0.557998 | 0.487161 | 0.652318 | 0.595443
0.000298 | 0.000589 | 0.003799 | 0.007288 | 0.004559
Middle | 0.635402 | 0.514251 | 0.440545 | 0.561055 | 0.550337
0.000254 | 0.000532 | 0.009169 | 0.003729 | 0.003936
Large | 0.5232 | 0.441354 | 0.345311 | 0.439341 | 0.478306
0.000319 | 0.000555 | 0.005952 | 0.002454 | 0.003588
rc202 | Small | 0.635634 | 0.600488 | 0.560277 | 0.576173 | 0.582909
0.000116 | 0.000439 | 0.001576 | 0.000463 | 0.001525
Middle | 0.591863 | 0.535674 | 0.495836 | 0.498852 | 0.567148
0.000324 | 0.000513 | 0.000986 | 0.001867 | 0.002845
Large | 0.530264 | 0.474198 | 0.479061 | 0.556831 | 0.543224
0.000897 | 0.000483 | 0.005019 | 0.001418 | 0.002381
rc204 | Small | 0.630401 | 0.597802 | 0.540024 | 0.471606 | 0.611194
0.000183 | 0.000192 | 0.00278 | 0.003056 | 0.00235
Middle | 0.646317 | 0.617087 | 0.557666 | 0.571228 | 0.655838
0.000236 | 0.00045 | 0.001948 | 0.002557 | 0.002609
Large | 0.607741 | 0.57318 | 0.55936 | 0.582608 | 0.559298
0.001551 | 0.001147 | 0.003931 | 0.001271 | 0.008846
RL | Small | 0.551697 | 0.515265 | 0.489368 | 0.436566 | 0.512563
0.000503 | 0.0006 | 0.001642 | 0.000742 | 0.002456
Middle | 0.530147 | 0.507949 | 0.493525 | 0.383 | 0.417351
0.000205 | 8.74E−05 | 0.00193 | 0.001007 | 0.00279
Large | 0.448719 | 0.427708 | 0.406559 | 0.311904 | 0.301467
0.000595 | 0.000319 | 0.003167 | 0.001017 | 0.001066
Table 3. Mean and variance metric values of IGD indicators.
Instance | ζ̄ | NSACOWDRL | NSACO | NSGA-III | NSGA-II | MOEA/D
c202 | Small | 0.067098 | 0.111056 | 0.221364 | 0.155425 | 0.149471
9.94E−05 | 5.76E−05 | 0.001808 | 0.001132 | 0.000783
Middle | 0.070933 | 0.095232 | 0.195238 | 0.156117 | 0.142658
0.000124 | 7.45E−05 | 0.000646 | 0.000893 | 0.000335
Large | 0.063498 | 0.096462 | 0.22384 | 0.179565 | 0.166596
5.61E−05 | 5.11E−05 | 0.001706 | 0.001494 | 0.001324
c206 | Small | 0.07471 | 0.123859 | 0.214364 | 0.099614 | 0.157687
0.000159 | 0.000116 | 0.001652 | 0.000463 | 0.000542
Middle | 0.089647 | 0.121177 | 0.205124 | 0.119925 | 0.15276
6.96E−05 | 0.000127 | 0.001728 | 0.000238 | 0.001202
Large | 0.111621 | 0.14161 | 0.207618 | 0.117877 | 0.173173
0.000449 | 0.000345 | 0.003593 | 0.000272 | 0.002181
rc202 | Small | 0.091929 | 0.095577 | 0.136415 | 0.12007 | 0.13582
7.24E−05 | 4.42E−05 | 0.000804 | 0.000165 | 0.000713
Middle | 0.092914 | 0.107702 | 0.170686 | 0.158157 | 0.141599
4.08E−05 | 5.77E−05 | 0.000682 | 0.000297 | 0.000587
Large | 0.143131 | 0.157795 | 0.211815 | 0.163551 | 0.184171
0.000346 | 9.64E−05 | 0.000707 | 0.00036 | 0.000859
rc204 | Small | 0.099484 | 0.10002 | 0.155438 | 0.179327 | 0.124656
0.000159 | 9.27E−05 | 0.001078 | 0.001097 | 0.000573
Middle | 0.107897 | 0.119688 | 0.217314 | 0.189579 | 0.22512
9.31E−05 | 6.03E−05 | 0.001167 | 0.001371 | 0.001906
Large | 0.109758 | 0.110154 | 0.162679 | 0.119891 | 0.172319
0.000275 | 0.00014 | 0.000765 | 0.000193 | 0.002228
RL | Small | 0.081081 | 0.089546 | 0.114181 | 0.125006 | 0.085965
1.93E−05 | 2.56E−05 | 0.000217 | 0.000377 | 0.000176
Middle | 0.07656 | 0.081223 | 0.12775 | 0.132086 | 0.107114
1.51E−05 | 2.22E−05 | 0.000873 | 0.000183 | 0.000203
Large | 0.075806 | 0.075908 | 0.166999 | 0.113566 | 0.103795
3.33E−05 | 1.65E−05 | 0.002057 | 7.47E−05 | 0.000376
Table 4. Mean and variance metric values of algorithms.
HV | Strategy | ζ̄ | NSACOWDRL | NSACO | NSGA-III | NSGA-II | MOEA/D
Strategy 1 | Small | 0.551697 | 0.515265 | 0.489368 | 0.436566 | 0.512563
0.000503 | 0.0006 | 0.001642 | 0.000742 | 0.002456
Middle | 0.530147 | 0.507949 | 0.493525 | 0.383 | 0.417351
0.000205 | 8.74E−05 | 0.00193 | 0.001007 | 0.00279
Large | 0.448719 | 0.427708 | 0.406559 | 0.311904 | 0.301467
0.000595 | 0.000319 | 0.003167 | 0.001017 | 0.001066
mean gap | 18.67% | 16.99% | 17.62% | 28.56% | 41.18%
Strategy 2 | Small | 0.548972 | 0.508876 | 0.469451 | 0.423446 | 0.45982
0.00019059 | 0.00036 | 0.001223 | 0.001704 | 0.001119
Middle | 0.576724 | 0.558853 | 0.510205 | 0.437301 | 0.458765
6.62792E−05 | 0.000253 | 0.002138 | 0.001383 | 0.001028
Large | 0.4549855 | 0.438979 | 0.430791 | 0.328832 | 0.333339
9.52814E−05 | 0.000192 | 0.002057 | 0.001357 | 0.000807
mean gap | 21.11% | 21.45% | 15.57% | 24.80% | 27.51%
Strategy 3 | Small | 0.607842 | 0.577377 | 0.531816 | 0.500355 | 0.537066
0.000156 | 0.000275 | 0.001741 | 0.000921 | 0.002507
Middle | 0.548341 | 0.517545 | 0.481369 | 0.38915 | 0.397993
0.000235 | 0.000321 | 0.005204 | 0.00121 | 0.001877
Large | 0.452389 | 0.43685 | 0.414419 | 0.325447 | 0.29446
0.000199 | 0.000382 | 0.001707 | 0.00078 | 0.001737
mean gap | 25.57% | 24.34% | 22.07% | 34.96% | 45.17%
IGD | Strategy | ζ̄ | NSACOWDRL | NSACO | NSGA-III | NSGA-II | MOEA/D
Strategy 1 | Small | 0.081081 | 0.089546 | 0.114181 | 0.125006 | 0.085965
1.93E−05 | 2.56E−05 | 0.000217 | 0.000377 | 0.000176
Middle | 0.07656 | 0.081223 | 0.12775 | 0.132086 | 0.107114
1.51E−05 | 2.22E−05 | 0.000873 | 0.000183 | 0.000203
Large | 0.075806 | 0.075908 | 0.166999 | 0.113566 | 0.103795
3.33E−05 | 1.65E−05 | 0.002057 | 7.47E−05 | 0.000376
mean gap | 6.51% | 15.23% | 31.63% | 14.02% | 19.74%
Strategy 2 | Small | 0.070188 | 0.078635 | 0.112759 | 0.120658 | 0.08723
1.14E−05 | 1.22E−05 | 0.000214 | 0.000263 | 4.24E−05
Middle | 0.067106 | 0.069043 | 0.109341 | 0.110483 | 0.093927
2.46E−05 | 1.16E−05 | 0.000552 | 0.000121 | 0.000114
Large | 0.064821 | 0.065644 | 0.135298 | 0.106542 | 0.091753
1.53E−05 | 6.89E−06 | 0.001603 | 0.000113 | 0.000309
mean gap | 7.65% | 16.52% | 19.19% | 11.70% | 7.13%
Strategy 3 | Small | 0.083499 | 0.085558 | 0.116708 | 0.126069 | 0.093247
2.57E−05 | 4.2E−05 | 0.00021 | 0.000104 | 0.000116
Middle | 0.082514 | 0.085099 | 0.13417 | 0.127182 | 0.110645
5.33E−05 | 2.18E−05 | 0.000563 | 0.000119 | 0.000152
Large | 0.074674 | 0.079224 | 0.13372 | 0.115114 | 0.113762
1.47E−05 | 8.18E−06 | 0.000623 | 0.00011 | 0.000174
mean gap | 10.57% | 7.40% | 13.01% | 9.49% | 18.03%
Table 5. Mean and variance metric values of HV and IGD for different networks.
HV | ζ̄ | NSACOWDRL_BP | NSACOWDRL_RBF | NSACOWDRL_PNN | NSACOWDRL_GRNN
Small | 6.215E−01 | 6.208E−01 | 6.367E−01 | 6.427E−01
8.768E−04 | 9.602E−04 | 1.183E−03 | 5.848E−04
Middle | 6.037E−01 | 5.901E−01 | 6.068E−01 | 6.026E−01
2.763E−04 | 2.104E−04 | 1.639E−04 | 1.809E−04
Large | 4.874E−01 | 4.733E−01 | 4.763E−01 | 4.901E−01
7.729E−04 | 2.365E−04 | 5.044E−04 | 4.079E−04
IGD | ζ̄ | NSACOWDRL_BP | NSACOWDRL_RBF | NSACOWDRL_PNN | NSACOWDRL_GRNN
Small | 6.881E−02 | 7.188E−02 | 6.537E−02 | 6.816E−02
3.990E−05 | 1.428E−05 | 2.453E−05 | 1.668E−05
Middle | 6.906E−02 | 6.994E−02 | 6.620E−02 | 6.826E−02
2.326E−05 | 2.518E−05 | 8.431E−06 | 1.017E−05
Large | 7.867E−02 | 7.937E−02 | 7.560E−02 | 7.539E−02
1.390E−05 | 9.746E−06 | 1.416E−05 | 1.164E−05