Article

A Swarm Intelligence Solution for the Multi-Vehicle Profitable Pickup and Delivery Problem

by Abeer I. Alhujaylan 1,* and Manar I. Hosny 2
1 Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
2 Computer Science Department, College of Computer and Information Sciences, King Saud University, Riyadh 11362, Saudi Arabia
* Author to whom correspondence should be addressed.
Algorithms 2024, 17(8), 331; https://doi.org/10.3390/a17080331
Submission received: 2 July 2024 / Revised: 17 July 2024 / Accepted: 25 July 2024 / Published: 31 July 2024
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Abstract

Delivery apps are experiencing significant growth, requiring efficient algorithms to coordinate transportation and generate profits. One problem that captures the goals of delivery apps is the multi-vehicle profitable pickup and delivery problem (MVPPDP). In this paper, we propose eight new metaheuristics to improve initial solutions for the MVPPDP, based on the well-known swarm intelligence algorithm, the Artificial Bee Colony (ABC): K-means-GRASP-ABC(C)S1, K-means-GRASP-ABC(C)S2, Modified K-means-GRASP-ABC(C)S1, Modified K-means-GRASP-ABC(C)S2, ACO-GRASP-ABC(C)S1, ACO-GRASP-ABC(C)S2, ABC(S1), and ABC(S2). All methods achieved superior performance in most instances in terms of processing time. For example, for 250 customers, the average times of the algorithms were 75.90, 72.86, 79.17, 73.85, 76.60, 66.29, 177.07, and 196.09 s, respectively, faster than the state-of-the-art methods, which took 300 s. Moreover, all proposed algorithms performed well on small-sized instances in terms of profit, achieving thirteen new best solutions and five solutions equal to the best-known solutions. However, the algorithms slightly lag behind on medium- and large-sized instances due to the greedy randomised strategy and the GRASP used in the scout bee phase. Finally, our algorithms rely on small population sizes and iteration counts to achieve the rapid processing times required by daily m-commerce apps, although reducing these parameters lowers the likelihood of obtaining good solution quality.

1. Introduction

Over the past five years, delivery apps have experienced substantial growth, particularly food delivery apps, where COVID-19 propelled the industry forward by several years as millions of individuals in lockdown ordered food and other essential services online. The grocery delivery platform Instacart reported that its objectives for 2022 were already met during the third week of lockdown. In 2022, China emerged as the largest food delivery market, with a staggering market value of USD 42.5 billion. By 2029, the market size of the food delivery app industry as a whole is anticipated to reach USD 165 billion [1]. Moreover, 85% of Americans place online food orders via food ordering applications, and 80% of restaurants in the United States provide customers with the convenience of mobile ordering [2]. Similarly, food delivery applications in Saudi Arabia provide instantaneous services and a streamlined user experience for individuals to enjoy food. As a result, new applications that vie to provide services, convenience, and comfort to over 35 million individuals emerge periodically. Examples of these applications that have become very popular in Saudi Arabia are Mrsool, Uber Eats, Talabat, Hunger station, Ninja, Jahez, To You, The Chifz, Shgardi, and NANA [3]. The escalating demand for food delivery applications has concurrently heightened the strain on delivery enterprises, which require expeditious algorithms to coordinate the transportation of goods in order to generate profits and guarantee client contentment. One of the problems presented in the literature that mimics the goals of food delivery apps, namely increasing profits, achieving user satisfaction, and reducing costs, is the multi-vehicle profitable pickup and delivery problem (MVPPDP).
Gansterer et al. [4] introduced the MVPPDP, which is a subtype of the selective pickup and delivery problem (SPDP) and a novel variation of the well-known pickup and delivery problem (PDP). Within the constraints of a one-day planning horizon, the MVPPDP seeks to identify the most profitable routes by minimising travel expenses and maximising revenue collection. The MVPPDP is a static problem characterised by a standardised fleet of vehicles, a central depot, a collection of customers, and a predetermined quantity of requests and products. A customer pair, consisting of the pickup customer and the delivery customer, defines each request. By doing so, the products are conveyed from a pickup customer to a delivery customer in order to generate a revenue stream for the operation. MVPPDP real-world applications include mobile applications for food delivery services and delivery companies.
The MVPPDP is a combinatorial optimisation (CO) problem that receives particular attention from researchers in computer science and operations research. Solving a CO problem mainly requires searching for the optimal or best possible solution from a finite set of feasible candidate solutions. Numerous algorithms can be used to solve CO problems, and they can be classified into two categories: exact algorithms, which guarantee optimal solutions but may require prohibitive computation time on large instances, and approximate algorithms, usually called heuristics or metaheuristics, which are not guaranteed to deliver an optimal solution but provide a good solution in a reasonable amount of time. The words heuristic and meta come from Greek: heuristic derives from 'heuriskein', meaning 'to find' or 'to discover' new methods and strategies to solve problems, while the prefix meta means 'beyond, at an upper level'. Metaheuristic methods can be classified, based on the number of solutions handled at the same time, into two categories: single-solution-based metaheuristics (S-metaheuristics) and population-based metaheuristics (P-metaheuristics). In S-metaheuristics, the focus is on modifying and improving a single candidate solution. In contrast, P-metaheuristics maintain and improve several candidate solutions at once: solutions in a population are improved iteratively and the whole population evolves until a set of good solutions is reached. The exploration-oriented features of P-metaheuristics make them better than S-metaheuristics at diversifying the search. The most famous examples of P-metaheuristics are algorithms inspired by natural behaviour, such as genetic algorithms and swarm intelligence (SI) [5].
Generally, there are two phases that should be applied to obtain a good problem solution: solution construction and solution improvement. Solution construction involves creating one or more initial possible solutions that can be considered as a starting point for exploring the search space. Solution improvement is based on modifying the starting solutions to improve them, according to predefined strategies, until a high-quality solution is reached or stopping criteria have been met (e.g., a maximum number of iterations, or a specific amount of time) [5].
Because the MVPPDP is an NP-hard problem (as it is a subtype of the SPDP, which is a special case of the PDP) [4], metaheuristic algorithms can be employed to efficiently locate near-optimal solutions in a reasonable amount of time, particularly for problems of moderate to large size. In this paper, we introduce novel algorithms for tackling this problem, with the objective of enhancing the initial solutions proposed in [6,7]. Our approach is based on the Artificial Bee Colony (ABC) algorithm, one of the widely recognised swarm intelligence algorithms. Diverse strategies were implemented with the ABC, each demonstrating efficacy in accelerating the search procedure and reducing processing time.
The primary goals of this paper are (1) assisting food delivery applications and delivery companies in optimising their services through accelerated customer selection and route planning, in order to maximise profits as well as customer satisfaction; (2) supporting efficient transport planning that can help conserve energy resources by reducing the environmentally damaging effects of gas emissions; and (3) demonstrating that, for some tested instances, the proposed approaches achieve results that surpass the state-of-the-art methods documented in the literature for the MVPPDP.
The rest of this paper is organised as follows: a review of some related work is presented in Section 2. Section 3 introduces the formal problem definition. The proposed method is described in detail in Section 4. Section 5 illustrates the experimental results. Finally, conclusions and future work are presented in Section 6.

2. Related Studies

The MVPPDP is classified as a type of SPDP, and its objective is to maximise profits while minimising travel expenses by visiting only profitable customer pairs. Achieving a favourable route that maximises the difference between the total revenue earned and the total travelling cost is the objective of both the MVPPDP and the profitable tour problem (PTP). The primary distinction is that in the PTP, the vehicle route is not subject to any constraints, including but not limited to the maximum trip time, vehicle capacity limits, or precedence constraints [8]. Research focusing on the PTP is in short supply in the scientific literature [9]. Hence, we commence by presenting research pertaining to the MVPPDP.
Gansterer et al. [4] introduced the initial four algorithms. The variable neighbourhood search (VNS) metaheuristic was implemented in [10]. The greedy construction heuristic (C1) and the two-stage cheapest insertion heuristic (C2) were employed during the construction phase of the VNS. During the improvement phase, the subsequent neighbourhood operators were implemented: the pairwise backward exchange, the pairwise forward exchange, forward insertion, relocate pairs, backward insertion, inter-tour exchange, inter-tour insert, inter-dummy insert, inter-dummy exchange and the gravity centre exchange. The authors implemented two approaches for applying the neighbourhood operators: a sequential (Seq) method, which employs a predetermined sequence of neighbourhoods, and a self-adaptive (Sea) method, which adjusts itself based on the search status to determine the sequence of neighbourhoods. In order to compare the proposed method with an alternative algorithm, a guided local search (GLS) metaheuristic-based algorithm was ultimately implemented. An evaluation of the proposed algorithm was conducted on 36 randomised instances of varying sizes. In terms of solution quality, the experimental results demonstrated that both variants of the VNS outperformed the GLS.
Haddad [11] introduced a new algorithm to solve the MVPPDP, known as IPPD, derived from a combination of an iterated local search and a random variable neighbourhood descent. During the search, the IPPD algorithm was not restricted to accepting only feasible solutions. For the construction of the initial solution, the C1 heuristic of Gansterer et al. [4] was implemented. To enhance the solution, several neighbourhood moves were implemented: pair swap or pair shift, block swap or block shift, pickup or delivery shift, inter-block swap or inter-block shift, gravity centre exchange, insert and remove operators, and inter-pair swap or inter-pair shift. The proposed algorithm was evaluated on the benchmark instances suggested by Gansterer et al. [4]. It demonstrated its effectiveness in handling small- and medium-sized cases by identifying new best solutions for six cases. However, for large instances, the performance of the proposed algorithm was inadequate.
In Alhujaylan and Hosny [6], the construction phase of the widely recognised greedy randomised adaptive search procedure (GRASP) was employed to develop preliminary solutions for the MVPPDP. They used the same dataset that was created by [4]. They conducted a performance comparison between the method and two construction heuristics that had been previously documented in the literature for constructing initial solutions to the MVPPDP. The experimental results demonstrated that the proposed method outperformed alternative construction heuristics, particularly in instances of a small size, where eight new best initial solutions to the problem were obtained.
In [7], Alhujaylan and Hosny presented new heuristics to construct initial solutions for the MVPPDP. The heuristics are based on first clustering the search space of the MVPPDP using three clustering methods: K-means, adaptive K-means, and ant colony optimisation (ACO). Then, a modified version of the GRASP has been used to construct the initial MVPPDP solution based on the results of each clustering algorithm. The proposed solution was applied in the dataset that was presented by the researchers of [4]. They compared their results with those from the other construction heuristics that have been previously used for the MVPPDP. The experimental results proved the effectiveness of the proposed algorithms in terms of both solution quality and processing time.
In [12], the authors incorporated time windows with uncertain travel times into the MVPPDP. They utilised a subset of the dataset created by [4], comprising twenty and fifty requests, corresponding to two and three vehicles, respectively. The objective is to identify a solution that maximises the net profit, calculated as the revenue collected minus the route cost and the penalty incurred for time window violations. Both wide and standard time windows were considered. Under the assumption that travel times are independent Gamma-distributed random variables, and from a risk-neutral perspective, they showed that the expected costs of early and late arrival can be calculated in closed form. The authors suggested a graph search approach as the solution method. Experiments demonstrated that exact solutions can be obtained in a reasonable amount of time.

3. Problem Definition

The MVPPDP is a predefined static problem having several constituents. It is postulated that a centralised depot is responsible for receiving customer requests. These requests comprise pickup and delivery customers in pairs. To fulfil these demands, we operate a fleet of identical vehicles that convey a variety of identical products from a subset of pickup customers to their respective delivery customers, thereby generating a specific profit. The following limitations of the MVPPDP must be considered:
  • The pairing constraint stipulates that a predefined customer pair is assigned to each request (pickup and delivery).
  • Precedence constraint: Priority must be given to visiting the pickup customer over the delivery customer.
  • The time limit for journeys: Every vehicle is subject to a daily travel time restriction that must not be surpassed during the service of customers.
  • Vehicle capacity: The capacity of each vehicle must not be exceeded while collecting products from pickup customers.
  • Every vehicle should depart from and arrive at the depot with an empty load.
  • A single visit is permitted per customer.
The MVPPDP can be formally defined as follows [10]: Let $G = (V, A)$ be a graph in which $V = \{0, \dots, 2n+1\}$ defines the vertex set, where vertices $0$ and $2n+1$ correspond to the central depot, while the remaining vertices represent the pickup customers $P = \{1, \dots, n\}$ and the delivery customers $D = \{n+1, \dots, 2n\}$. $A = \{(i, j) : i, j \in V, i \neq j\}$ defines the arc set, where each arc is associated with a non-negative routing cost $c_{ij}$. Each delivery vertex $i$ has a revenue $r_i$ to be gained when visiting it. Also, each vertex $i$ has a supply $q_i$ (pickup, $q_i > 0$) or demand (delivery, $q_{n+i} = -q_i$). At the start, there is no supply or demand at the depot ($q_0 = 0$). In addition, each vertex $i$ has a service duration $d_i$. There is a set of vehicles $K = \{1, \dots, m\}$, and each vehicle has a load capacity $C$ and a maximum tour time $T$.
The objective function (OF) of the MVPPDP aims to maximise the total profit by subtracting the total travel costs from the total revenues. It can be formulated as follows:
$$\text{Maximise} \quad \sum_{i \in V} \sum_{j \in V} \sum_{k \in K} (r_i - c_{ij})\, x_{ijk} \qquad (1)$$
The mathematical model of the MVPPDP can be formulated as in [4,6] as follows:
$$\sum_{i \in V} \sum_{k \in K} x_{ijk} \le 1 \qquad \forall j \in V \qquad (2)$$
$$\sum_{j \in V} \sum_{k \in K} x_{ijk} \le 1 \qquad \forall i \in V \qquad (3)$$
$$x_{i0k} = 0 \qquad \forall i \in V,\ k \in K \qquad (4)$$
$$x_{2n+1,jk} = 0 \qquad \forall j \in V,\ k \in K \qquad (5)$$
$$\sum_{i \in V} \left( x_{ijk} - x_{jik} \right) = 0 \qquad \forall j \in V \setminus \{0,\, 2n+1\},\ k \in K \qquad (6)$$
$$\sum_{j \in V} \left( x_{ijk} - x_{n+i,jk} \right) = 0 \qquad \forall i \in P,\ k \in K \qquad (7)$$
$$\sum_{j \in V} x_{0jk} = \sum_{i \in V} x_{i,2n+1,k} = 1 \qquad \forall k \in K \qquad (8)$$
$$x_{ijk} = 1 \Rightarrow Q_{jk} = Q_{ik} + q_j \qquad \forall i \in V,\ j \in V \setminus \{0\},\ k \in K \qquad (9)$$
$$Q_{ik} \le C \qquad \forall i \in V,\ k \in K \qquad (10)$$
$$Q_{0k} = 0 \qquad \forall k \in K \qquad (11)$$
$$B_{ik} \le B_{n+i,k} \qquad \forall i \in P \qquad (12)$$
$$B_{jk} \ge x_{ijk} \left( B_{ik} + d_i + t_{ij} \right) \qquad \forall i \in V,\ j \in V,\ k \in K \qquad (13)$$
$$B_{2n+1,k} \le T \qquad \forall k \in K \qquad (14)$$
$$B_{0k} = 0 \qquad \forall k \in K \qquad (15)$$
$$x_{ijk} \in \{0, 1\} \qquad \forall i \in V,\ j \in V,\ k \in K \qquad (16)$$
$$Q_{ik},\ B_{ik} \ge 0 \qquad \forall i \in V,\ k \in K \qquad (17)$$
Constraints (2) and (3) require that each vertex be visited at most once. Constraints (4) and (5) prevent any vehicle from entering the origin depot (beginning point) or leaving the destination depot (end point). Constraint (6) ensures flow conservation. Constraint (7) states that each client pair (pickup and delivery) must be served by the same vehicle. Constraint (8) requires that all vehicles start at and return to the depot. Constraint (9) tracks the load carried by each vehicle, updating it with the supply or demand of every visited vertex. Constraint (10) specifies that the load of a vehicle is limited by C. Constraint (11) specifies that all vehicles start with an empty cargo. Constraint (12) indicates that the delivery customer cannot be visited before the pickup customer. Constraint (13) states that the earliest time to begin service at a vertex j is determined by adding, to the start of service at vertex i, the service time at i and the travel time between i and j. Constraint (14) indicates that the maximum time for each vehicle is limited by T. Constraint (15) specifies that each vehicle begins at the depot at time zero. Finally, the decision variables are defined by (16) and (17).
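To make these constraints concrete, the following is a minimal Python sketch of a feasibility check for a single vehicle route under the capacity, tour-time, and pickup-before-delivery constraints above. The data structures (a route as a list of vertex indices, dictionaries for supplies, service durations, and travel times, and a pickup_of mapping) are illustrative assumptions, not the notation of the authors' MATLAB implementation.

```python
def route_is_feasible(route, q, d, t, pickup_of, C, T):
    """Check one vehicle route against the MVPPDP constraints (illustrative sketch).

    route     : list of vertex indices, starting and ending at the depot
    q         : dict vertex -> supply (>0 pickup, <0 delivery, 0 depot)
    d         : dict vertex -> service duration
    t         : dict (i, j) -> travel time between vertices i and j
    pickup_of : dict delivery vertex -> its paired pickup vertex
    C, T      : vehicle capacity and maximum tour time
    """
    load, elapsed, visited = 0, 0.0, set()
    for prev, curr in zip(route, route[1:]):
        elapsed += d[prev] + t[(prev, curr)]   # serve prev, then travel to curr
        load += q[curr]
        if load > C:                           # capacity constraint
            return False
        if curr in pickup_of and pickup_of[curr] not in visited:
            return False                       # delivery reached before its pickup
        visited.add(curr)
    return elapsed <= T and load == 0          # tour-time limit, empty back at the depot
```

In the metaheuristics of Section 4, a check of this kind would be run after every neighbourhood move before a modified route is accepted.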

4. Proposed Method

In the current study, we employed the Artificial Bee Colony (ABC) algorithm that was first proposed by Karaboga [13]. It mimics the collaborative and collective foraging behaviour of a bee colony. In the ABC algorithm, there are three different types of bees: scout bees, employed bees, and onlooker bees. Each type has a different role in the exploration and exploitation of food sources. A food source represents the solution to a problem, and better solutions belong to food sources with more nectar. The ABC algorithm works as follows: Initially, the scout bees are assigned randomly to the search space to find initial food sources. Then, the employed bees are sent out to exploit the places discovered by scout bees, with each employed bee assigned to one food source. After that, the employed bees start a neighbourhood search to find a better food source nearby. After finding a better food source, the employed bees leave all previous food sources and exploit only the better ones. These employed bees then return to the hive and share their information about discovered food sources with onlooker bees through what is called the waggle dance. The decision of whether the onlooker bees will follow the employed bees depends on the richness of the food sources. If the onlooker bees do decide to follow, they then become employed bees and repeat the procedure of the employed bees. After several iterations, the food source might be exhausted; at that point, the employed bees become scout bees and start searching randomly to find a new food source to replace the previous one. The flowchart of the ABC algorithm is illustrated in Figure 1 [14].
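As a complement to the flowchart in Figure 1, the sketch below shows the generic ABC loop on a simple numerical minimisation problem, only to make the roles of the three bee types concrete; it is not the MVPPDP-specific procedure of Algorithm 1, and all names and parameter values are illustrative assumptions.

```python
import random

def abc_minimise(f, dim, bounds, n_sources=20, limit=30, max_iter=200):
    """Minimal generic Artificial Bee Colony loop for a non-negative objective (sketch)."""
    lo, hi = bounds
    rand_source = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    sources = [rand_source() for _ in range(n_sources)]    # scout bees: random initial food sources
    trials = [0] * n_sources                               # non-improvement counters

    def neighbour(x):
        # employed/onlooker move: perturb one randomly chosen dimension towards another source
        y = x[:]
        j = random.randrange(dim)
        y[j] += random.uniform(-1, 1) * (y[j] - random.choice(sources)[j])
        return y

    def try_improve(i):
        candidate = neighbour(sources[i])
        if f(candidate) < f(sources[i]):                   # greedy acceptance
            sources[i], trials[i] = candidate, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_sources):                         # employed bee phase
            try_improve(i)
        fitness = [1.0 / (1.0 + f(s)) for s in sources]    # richer sources get higher weight
        for _ in range(n_sources):                         # onlooker bee phase
            try_improve(random.choices(range(n_sources), weights=fitness)[0])
        for i in range(n_sources):                         # scout bee phase
            if trials[i] > limit:
                sources[i], trials[i] = rand_source(), 0   # abandon and re-scout
    return min(sources, key=f)

# Example: abc_minimise(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```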
The ABC heuristic has been used to solve several VRP variants; for example, Szeto et al. [15] used it for the capacitated vehicle routing problem, Wu et al. [16] for the VRP with time windows, and Yao et al. [17] for the periodic VRP.
In this paper, we aim to improve the initial solutions proposed in [7]. There are many metaheuristics that have been used to enhance the quality of solutions, such as local search (LS), simulated annealing (SA), and VNS. In our study, we also use a metaheuristic that combines greedy and randomised searches and is based on honeybee behaviour, namely the ABC. The main reason for selecting the ABC rather than other honeybee algorithms is its popularity, with the number of studies that have used the ABC compared to other variants reaching 54%, as reported in Karaboga’s survey [18]. For tackling the MVPPDP, two versions of the ABC are proposed: ABC without clustering (ABC) and ABC with clustering (ABC(C)), as will be explained next.

4.1. ABC without Clustering (ABC)

In this part of the research, we aimed to improve the initial solutions that have been achieved using the improved GRASP in [7]. The ABC metaheuristic consists of three phases: the employed bee phase, the onlooker bee phase, and the scout bee phase. The main goal of the first two phases is to increase the intensification of the search space to obtain good solutions, while the last phase aims to increase the diversification and prevent getting stuck in local optima. The details of our proposed algorithm are illustrated below.
1. Employed bee phase:
After assigning the initial solutions to the employed bees, they start a neighbourhood search to find better solutions nearby by randomly selecting one of a set of neighbourhood operators. The neighbourhood operators aim to make a small change to the solution to improve it. Thus, for each solution, one route is selected randomly to apply one operator that is also selected randomly. The set of neighbourhood operators is described below:
  • Relocate a pickup customer only: Choose one pickup customer randomly and relocate its position, such that the new pickup position must not come after its delivery customer.
  • Relocate a delivery customer only: Choose one delivery customer randomly and relocate its position, such that the new delivery position must not come before its pickup customer.
  • Relocate a customer pair: Choose a customer pair randomly and relocate their positions, preserving the precedence constraint (i.e., pickup before delivery).
  • Swap two customer pairs: Select two customer pairs randomly and swap their positions.
  • Two-Opt operator: Select two consecutive pairs of customers (a consecutive pair is one in which a pickup customer is followed directly by its delivery customer) and reorder the edges between pairs while considering the precedence constraint.
  • Remove one customer pair randomly and insert an unvisited one: Remove a customer pair randomly, select an unvisited customer pair that has a high insertion ratio (IR) using Equation (18), and insert it into the best position in the route.
$$IR_{customer\ pair} = \frac{r_{customer\ pair}}{c(depot, PC)}, \qquad (18)$$
where r is the revenue of the customer pair and c(depot, PC) is the distance between the depot and the pickup customer of that pair (a minimal code sketch of this ratio-based insertion is given after the operator list).
  • Remove a customer pair with a low IR and insert an unvisited pair with a high IR: Compute the IR for all visited pairs using Equation (18), and remove a customer pair with the lowest IR value. Then, recompute the IR for all unvisited customer pairs, select one with the highest IR value, and insert it into the best position in the route.
  • Insert a customer pair with a high IR: Compute the IR for all unvisited customer pairs, select one with the highest IR value, and insert it into the best position in the route.
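The sketch below illustrates how the insertion ratio of Equation (18) could drive these last three operators: the IR is computed for the unvisited pairs, the pair with the highest ratio is selected, and it is inserted at the position that adds the least travel cost while remaining feasible. The pair representation and helper names (dist, revenue, is_feasible) are assumptions for illustration, not the authors' implementation.

```python
def insertion_ratio(pair, revenue, dist, depot=0):
    """IR of Equation (18): revenue of the pair / distance from the depot to its pickup."""
    pickup, _delivery = pair
    return revenue[pair] / dist[(depot, pickup)]

def insert_best_ir_pair(route, unvisited_pairs, revenue, dist, is_feasible):
    """Insert the unvisited pair with the highest IR at its cheapest feasible position."""
    if not unvisited_pairs:
        return route
    best_pair = max(unvisited_pairs, key=lambda p: insertion_ratio(p, revenue, dist))
    pickup, delivery = best_pair
    best_route, best_cost = None, float("inf")
    for i in range(1, len(route)):                   # pickup slot (never after the end depot)
        with_pickup = route[:i] + [pickup] + route[i:]
        for j in range(i + 1, len(with_pickup)):     # delivery slot, always after its pickup
            cand = with_pickup[:j] + [delivery] + with_pickup[j:]
            cost = sum(dist[(a, b)] for a, b in zip(cand, cand[1:]))
            if cost < best_cost and is_feasible(cand):
                best_route, best_cost = cand, cost
    if best_route is None:
        return route                                  # no feasible insertion found
    unvisited_pairs.remove(best_pair)
    return best_route
```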
After applying the selected neighbourhood operator, if the new solution S′ is feasible, the fitness of S′ is computed using Equation (1) and compared with the fitness of the old solution S. To escape from local optima, we decided to use the demon algorithm (DA) rather than a simple local search algorithm, since the DA has the ability to accept a feasible non-improving new solution. The DA is a variant of the SA that was presented by Creutz [19]. The acceptance function of the DA is based on the value of the demon (credit), which is initialised at the beginning with a given value D. In the case where the new solution S′ is not an improvement, the difference (ΔE) between the fitness of S and S′ is compared with the demon value, and S′ is accepted if this difference is less than or equal to D. If S′ is accepted, it is memorised, and the value of D is decreased; otherwise, the old solution S is memorised, the value of D is increased, and a non-improvement counter for this solution is increased by one. The DA is simpler and faster than the SA because the latter requires more computational time for generating a random number and an exponential function. Also, the DA contains only one parameter that needs to be tuned. The DA proved to be an effective algorithm when it was used to solve several real-life problems, achieving good results in terms of solution quality and time when compared with the SA [19].
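The acceptance rule of the demon algorithm described above can be summarised in a few lines. The sketch below assumes a maximisation objective, as in Equation (1), and mirrors the update used in Algorithm 1: an improving solution is always accepted, a worsening one is accepted only if the demon credit can cover the deterioration, and a rejection increases both the demon credit and the non-improvement counter. Names are illustrative.

```python
def demon_accept(of_old, of_new, demon):
    """Demon algorithm acceptance for a maximisation objective (illustrative sketch).

    Returns (accept_new, updated_demon, non_improvement_increment).
    """
    if of_new > of_old:                 # strict improvement: always accept
        return True, demon, 0
    delta_e = of_old - of_new           # deterioration of the objective value
    if delta_e <= demon:                # the demon credit can absorb the deterioration
        return True, demon - delta_e, 0
    return False, demon + delta_e, 1    # reject, increase the credit, count the failure
```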
2. Onlooker bee phase:
Only the solutions that have good OF values have a high probability of being assigned to onlooker agents. Several strategies have been proposed for selecting solutions, such as roulette wheel selection, tournament selection, stochastic universal sampling, and rank-based selection [20]. We adopted the most popular one, which is the roulette wheel selection strategy, in which the probability of selection P i is computed as follows:
$$P_i = \frac{OF_i}{\sum_{j=1}^{n} OF_j}, \qquad (19)$$
where $OF_i$ is the objective function value of a solution $i$, and $\sum_{j=1}^{n} OF_j$ is the sum of the objective function values of all solutions in the population. After a solution is selected, one of its routes is selected randomly, all the neighbourhood operators used in the employed bee phase are applied to it sequentially, and the fitness of the new solution is measured using Equation (1). To increase intensification and focus only on promising areas of the search space, the acceptance criterion is based on a straightforward local search (LS) rule: if S′ is better than S, memorise S′ and skip the remaining operators; otherwise, try the next operator. If the old solution is not improved by any operator, it is memorised, and a non-improvement counter for this solution is increased by one.
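A minimal sketch of the roulette wheel selection of Equation (19) is given below. It assumes that all objective function values are positive profits and relies only on the Python standard library; random.choices samples with replacement, proportionally to the supplied weights.

```python
import random

def roulette_wheel_select(population, objective_values, n_select):
    """Select n_select solutions with probability proportional to their OF (Equation (19))."""
    total = sum(objective_values)
    probabilities = [of / total for of in objective_values]  # P_i of Equation (19)
    return random.choices(population, weights=probabilities, k=n_select)
```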
3. Scout bee phase:
The scout bee phase aims to increase diversification and expand the explored solution space, thus preventing getting stuck in local optima. The scout bee phase starts when the non-improvement counter of a solution $S_i$ reaches a predefined number of trials, called the limit. The limit refers to the number of attempts made to improve the solution $S_i$ in the employed and onlooker bee phases. Three important actions are performed when the non-improvement counter of solution $S_i$ reaches the limit: (1) clear the non-improvement counter, (2) clear the solution $S_i$, and (3) reconstruct the solution $S_i$. A suitable reconstruction strategy plays an important role in the scout bee phase, since it either contributes to enhancing the quality of the solution or leaves the solution unchanged. Thus, we propose two strategies to reconstruct the solution $S_i$. The details of these reconstruction heuristics are presented as follows:
  • First strategy (ABC(S1)): One of the following three heuristics is selected randomly:
1. Random insertion heuristic:
While the time constraint is not violated, the customer pairs are selected randomly and inserted directly into the route. Each pickup customer is followed by its delivery customer; therefore, there is no need to check the vehicle’s capacity and precedence constraints.
2. Greedy and randomised heuristic:
While the time constraint is not violated, the customer pairs are selected randomly and inserted into the best positions in the route. In addition to the time constraint, the vehicle capacity and precedence constraints must not be violated.
3. Greedy insertion heuristic:
The insertion ratio is computed for all customer pairs using Equation (18). The customer pair that has the highest IR value is then inserted into the route in the best position. Time, vehicle capacity, and precedence constraints must not be violated.
  • Second strategy (ABC(S2)): This strategy works exactly like the GRASP in [7].
After the solution $S_i$ is reconstructed, its OF is computed using Equation (1), and $S_i$ is memorised as a new solution, which is then improved again through the employed and onlooker bee phases.
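A compact sketch of the first reconstruction strategy (S1) is shown below: one reconstruction heuristic is picked at random and applied repeatedly until no further customer pair fits within the vehicle's time limit. The helper callables (for example greedy_randomised_insertion and greedy_ir_insertion) stand in for the heuristics described above and are assumptions for illustration only.

```python
import random

def reconstruct_s1(start_depot, end_depot, unvisited_pairs, heuristics):
    """Scout bee reconstruction, strategy S1 (illustrative sketch).

    heuristics: list of callables, e.g. [greedy_randomised_insertion, greedy_ir_insertion];
    each takes (route, unvisited_pairs) and returns an extended feasible route,
    or None once no further pair can be added without violating the time limit.
    """
    route = [start_depot, end_depot]     # start again from an empty tour
    build = random.choice(heuristics)    # one heuristic is selected at random
    while True:
        extended = build(route, unvisited_pairs)
        if extended is None:
            break
        route = extended
    return route
```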
The outline of the routing phase of our ABC is presented in Algorithm 1, where the notation is defined as follows: Pop_Size: the population size; Max_Iter: the maximum number of iterations; Num_Vehicles: the number of vehicles; US: pairs of customers who have not been served; ΔE: the difference between the OF values of an old and a new solution; D: the demon value; and Non-improvement: a counter of the number of unsuccessful attempts to improve a solution.
Algorithm 1: The pseudocode of the proposed ABC.
Pop = {S_1, S_2, S_3, ..., S_n}; /* Initial solutions that have been generated using GRASP(V2) */
Limit = initial limit value; D = initial demon value; Non-improvement = 0; Global_Best = best solution in Pop;
For M = 1 to Max_Iter
  /*** Employed bee phase ***/
  For i = 1 to Pop_Size
    Select randomly one route of solution S_i;
    Apply one randomly selected neighbourhood operator to the current route of solution S_i to obtain a new solution S'_i;
    If S'_i is feasible Then Compute the OF of S'_i;
    End If
    If (OF(S'_i) > OF(S_i))
      Memorise the new solution S'_i;
    Else If (ΔE ≤ D_i)
      Memorise the new solution S'_i;
      Update the demon value: D_i = D_i - ΔE;
    Else
      Memorise the old solution S_i;
      Update the demon value: D_i = D_i + ΔE;
      Non-improvement(i) = Non-improvement(i) + 1;
    End If
  End For
  /*** Onlooker bee phase ***/
  Select N solutions from the employed bees using the roulette wheel selection strategy;
  For i = 1 to N
    Select randomly one route of solution S_i;
    For j = 1 to number of neighbourhood operators
      Apply neighbourhood operator (j) to the current route of solution S_i to obtain a new solution S'_i;
      If S'_i is feasible Then Compute the OF of S'_i;
      End If
      If (OF(S'_i) > OF(S_i))
        Memorise the new solution S'_i;
        Break;
      End If
    End For
    If the solution S_i has not improved after applying the neighbourhood operators
      Non-improvement(i) = Non-improvement(i) + 1;
    End If
  End For
  /*** Scout bee phase ***/
  For i = 1 to N
    If Non-improvement(i) > Limit
      Non-improvement(i) = 0;
      S_i = ∅;
      Reconstruct S_i using either the S1 or S2 strategy of the scout bee phase; Compute the OF of S_i;
    End If
  End For
  Assign the solution that has the maximum OF to Best_Solution;
  If (Global_Best < Best_Solution)
    Global_Best = Best_Solution;
  End If
End For
Output: the Global_Best solution

4.2. ABC with Clustering (ABC(C))

In this part of the study, the ABC method described in Section 4.1 was used to improve the initial solutions constructed using the various GRASP(C) methods reported in [7], namely K-means-GRASP(C), modified K-means-GRASP(C), and ACO-GRASP(C). Some modifications must be considered when using the ABC with those algorithms. For the following operators, the insertion process is restricted to customer pairs inside the same cluster only: remove one customer pair randomly and insert an unvisited one; remove a customer pair with a low IR and insert an unvisited pair with a high IR; and insert a customer pair with a high IR. The same insertion restriction is applied in the scout bee phase in both strategies (S1 and S2). We used the same parameter values that were used in the ABC (without clustering) to improve the initial solutions in this part of the study. Moreover, the same scout bee strategies (S1 and S2) were used in the ABC(C) algorithms. Therefore, overall, we have six algorithms that use the ABC(C): (1) K-means-GRASP(C)-ABC(C)S1, (2) K-means-GRASP(C)-ABC(C)S2, (3) modified K-means-GRASP(C)-ABC(C)S1, (4) modified K-means-GRASP(C)-ABC(C)S2, (5) ACO-GRASP(C)-ABC(C)S1, and (6) ACO-GRASP(C)-ABC(C)S2. For example, the first algorithm refers to generating the initial solution using the GRASP after clustering the search space with the K-means algorithm and then improving it using the ABC with reconstruction strategy S1.
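The only change needed to reuse the operators of Section 4.1 inside a cluster is to filter the candidate pairs before insertion, as in the short sketch below. The mapping cluster_of (customer pair to the cluster label produced by K-means, modified K-means, or ACO) and the surrounding names are assumptions carried over from the earlier sketches, not the authors' implementation.

```python
def candidates_in_cluster(unvisited_pairs, cluster_of, cluster_id):
    """Keep only the unvisited pairs that belong to the current route's cluster."""
    return [pair for pair in unvisited_pairs if cluster_of[pair] == cluster_id]

# Illustrative use with the earlier IR-based operator: the candidate pool passed
# to the operator is first restricted to the route's own cluster.
# pool = candidates_in_cluster(unvisited_pairs, cluster_of, cluster_id)
# if pool:
#     route = insert_best_ir_pair(route, pool, revenue, dist, is_feasible)
```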

5. Computational Experiments

All algorithms have been implemented in MATLAB (R2017b) on a laptop powered by an Intel Core i7-4510U processor operating at 2.00 GHz (2601 MHz, two cores, four logical processors). The following subsections provide descriptions of the datasets utilised and the specifics of parameter tuning prior to presenting the experimental results.

5.1. Test Instances

To test the performance of our proposed methods, we used the same instances as [4]. The dataset comprises 36 instances, categorised into three groups: small (20 and 50 customers, served by two and three vehicles, respectively), medium (100 and 250 customers, served by four and five vehicles, respectively), and large (500 and 1000 customers, served by six and eight vehicles, respectively). Each group comprises twelve instances. The instances vary in terms of both time and revenue. To generate short and long routes, the total time limit is either small or large, ranging from 2500 to 15,000 s.
Furthermore, the revenue amounts are established in one of three ways: at random, proportional to demand, or equal for all customers. Revenue is acquired from the delivery customer subsequent to the shipment of merchandise originating from the pickup customer. The quantity of goods is specified as a positive integer between 1 and 50.

5.2. Parameter Tuning

5.2.1. Parameter Tuning of ABC without Clustering

The ABC algorithm has three parameters: the number of iterations (Max_Iter), the size of the population (Pop_Size), and the initial value of the limit. Additionally, since we used the DA in the employed bee phase, the initial value of the demon ( D ) needs to be tuned. The details of testing each parameter are presented below.

Number of Iterations

Each data instance of 100 customers was tested with different numbers of iterations: 10, 50, 100, 500, and 750. As seen in Table 1, the results of testing showed that there was no enhancement in the objective function values after 500 iterations. Thus, the maximum number of iterations was taken to be 500.

Size of Population

The same dataset instances were tested again to select the appropriate number of solutions in the population. At the beginning, the population size was set to 30, because it provided the best results in [7]. Then, we increased the population to different sizes: 40, 50, and 60 solutions. Table 2 presents the values of the OF after setting the number of iterations to 500. The results showed that increasing the population size beyond 50 did not improve the objective function value. Thus, the population size was taken to be 50.

Limit Value

The limit value of the non-improvement counter is another important parameter of the ABC algorithm because it affects the convergence of the search process. Thus, the value selected for the limit should be suitable for the number of iterations. Therefore, we tested the limit using two values: 100 and 200. The results in Table 3 show that increasing the limit value beyond 100 did not contribute to an improvement in the objective function value. Thus, the limit value was taken to be 100.

Demon Value

Several values have been tested to select the best initial value of the demon ( D ): 1000, 2500, 5000, and 7500. The results in Table 4 indicated no improvement in objective function if the demon value was greater than 5000. Thus, the demon value D was taken to be 5000.

5.3. Parameter Tuning of ABC(C)

Since we used the same ABC algorithm, with little modification, in the ABC with the clustering approach, we adopted the same parameter values that have been used in the ABC for our ABC(C).

5.4. Experimental Results

As previously mentioned, we proposed two versions of the ABC to improve the initial solutions: the ABC used with the GRASP, and the ABC(C) used with the clustering methods, i.e., K-means-GRASP(C), modified K-means-GRASP(C), and ACO-GRASP(C). We also have two separate strategies for the scout bee phase. S1 constructs the solution by randomly selecting one of three heuristics: random insertion, greedy and randomised insertion, or greedy insertion. After testing this strategy, we found that the random insertion heuristic did not improve the results compared to the others, so we eliminated this choice and kept the other two. The second strategy, S2, works like the GRASP in [7]. Therefore, we have eight proposed methods overall, named as follows: (1) ABC(S1), (2) ABC(S2), (3) K-means-GRASP(C)-ABC(C)S1, (4) K-means-GRASP(C)-ABC(C)S2, (5) modified K-means-GRASP(C)-ABC(C)S1, (6) modified K-means-GRASP(C)-ABC(C)S2, (7) ACO-GRASP(C)-ABC(C)S1, and (8) ACO-GRASP(C)-ABC(C)S2. All proposed algorithms were run five times, as was done for the previous algorithms in [6,7]. We compared our proposed algorithms with the state-of-the-art algorithms presented in [10,11], namely C1-Seq VNS, C2-Seq VNS, C1-Self Adaptive VNS, C2-Self Adaptive VNS, and IPPD, for solving the MVPPDP in terms of processing time and profits gained.
First, with respect to the processing time, all previous algorithms have been run for 5 min for each instance. We note, though, that this time is not considered acceptable for the m-commerce apps (e.g., Hungerstation, MRSOOL, etc.) that are used daily and rely on the speed of responding, so in our case, we used the stopping condition of 500 iterations to measure the processing time. Table 5 presents the comparison results between our proposed algorithms and state-of-the-art algorithms in terms of time (in seconds). Also, the average of each category (highlighted rows in Table 5) has been used to compare the performance of the algorithms as shown in Figure 2. From Table 5 and Figure 2, it is evident that our proposed clustering-based algorithms outperform all previous algorithms with respect to processing time in all instances, except for instances involving 1000 customers. Also, the other proposed algorithms that are not based on clustering (ABC(S1) and ABC(S2)) were faster than the previous algorithms for small- and medium-sized instances. The reason why our algorithms do not excel in large-sized instances is probably due to the scout bee strategies that have been used, namely the greedy method in the first strategy and the GRASP in the second. In the greedy method, the customer pairs are selected based on a specific measure and then inserted into the best position that leads to decreasing the travel cost. This approach gives good results but consumes time, especially in large-sized instances. In the second strategy, the unvisited customer pairs are selected based on computing the IR, which also leads to consuming time in large-sized instances. Overall, though, the clustering of the search space clearly helped to process large-sized instances and made the processing time faster compared to other algorithms that deal with all customers without clustering.
On the other hand, we compared our proposed algorithms and the previous algorithms in terms of profit as computed in Equation (1). Table 6 presents the average profit for five runs, while Table 7 presents the best profit obtained over five runs and the gap between the best profit and the best known profit using the following equation:
$$Gap = \frac{\text{Best profit over five runs} - \text{Best-known profit}}{\text{Best-known profit}} \times 100 \qquad (20)$$
The best-known profit was computed in [9] by running each instance for 20 h, and then the best profit was saved as the best-known profit. As can be seen from Table 6 and Table 7, our proposed algorithms perform well in terms of profit for small-size instances, with the ABC(S1) returning six new best solutions and two other solutions that were equal to the best-known solutions. The ABC(S2) also achieved three new best solutions and three solutions that are equal to the best-known solution. Also, the modified K-means-GRASP(C)-ABC(C)S1 achieved two new best solutions. In addition, the K-means-GRASP(C)-ABC(C)S1, K-means-GRASP(C)-ABC(C)S2, modified K-means-GRASP(C)-ABC(C)S2, and ACO-GRASP(C)-ABC(C)S2 each produced one new best solution. Overall, the general performance for all proposed algorithms was acceptable and close to the previous algorithms in terms of average profit on these instances.
For the medium- and large-sized instances, though, our proposed algorithms did not do well compared to previous methods, as evident by the large average gap percentage compared to the best-known results shown in Table 6. There are many possible reasons for this:
  • Again, the methods used in the scout phase of the ABC seem to have an effect on the performance of the proposed algorithms. The first is the greedy randomised strategy, which selects a random customer pair, meaning the worst pair (either with low revenue or separated by a great distance) may be selected, resulting in poor solution quality. Moreover, in some cases, the neighbourhood operators cannot sufficiently enhance the quality of the solution because of time violations or load constraints. The second method uses the GRASP, which constructs solutions by randomly selecting from the Restricted Candidate List (RCL), which is half the size of the Candidate Solution Set (CSS). Thus, selecting unvisited customers is not a problem for small-sized instances because the number of customer pairs is small, which means the chance of selecting the best customer pair (i.e., that with the highest IR2 value) is high. In contrast, in medium- and large-sized instances, where the size of the RCL is large, the chance of selecting the best customer pair decreases, which affects the quality of the solutions.
  • The number of solutions in a population, the number of iterations, and the limit value play important roles in the algorithms’ performance. In our algorithms, we rely on small numbers of both solutions and iterations because a fast processing time is a critical demand for daily m-commerce apps. These decreased iteration numbers and population sizes, however, reduce the chances of achieving satisfactory solution quality.
  • All previous algorithms were based on VNS, which has the ability to change the size of the neighbourhood search each time the solution quality is not enhanced. Thus, expanding the neighbourhood structure in a systematic way helps the VNS fully explore the neighbourhood and obtain good solutions. In contrast, the diversification feature in our approach is based either on the greedy feature (inserting the customer pair with the highest IR into the best position) or on a combination of greedy and randomised features (inserting a random customer pair into the best position or inserting a random customer pair from RCL into the best position). However, these strategies seem less successful at diversification than the VNS, which relies on an organised neighbourhood exploration mechanism.
Figure 3 compares our proposed algorithms based on the average of the best solutions for each category of instances (highlighted rows in Table 6). It is clear that the ABC in both versions (S1 and S2) still has the advantage over all the other algorithms for the instances comprising 20, 50, 100, 250, and 500 customers. In contrast, both versions of K-means-GRASP(C)-ABC(C) and modified K-means-GRASP(C)-ABC(C) outperform the GRASP(V2)-ABC in both versions (S1 and S2) for the 1000-customer instances. The main reason for the poor performance of the clustering algorithms on small- and medium-sized instances is the distribution of customer pairs in K-means and modified K-means, where the clustering process may place only a small number of customers within a cluster; the vehicle then still has remaining service time, but no more customers exist within that cluster. Thus, we conclude that the clustering algorithms (except ACO) do well in large-sized instances only.
Regarding the ACO-GRASP(C)-ABC(C)(S1 and S2), it is clear that it has the worst performance among all algorithms. Again, the clustering method of ACO has a significant effect on results. Specifically, the randomness feature of ACO contributes to an inaccurate clustering of customer pairs.
In addition, as seen in the highlighted rows in Table 7, we noticed that the second strategy for scout bees outperforms the first one in many instances because the first strategy includes selecting either the best customer pairs or random pairs, and the random selection option may affect the quality of the solution, while the second strategy selects customer pairs randomly from the RCL, which contains the best customer pairs in the CSS.

6. Conclusions

In this paper, we presented eight new metaheuristics to improve the initial solutions for the multi-vehicle profitable pickup and delivery problem (MVPPDP). We proposed two versions of the Artificial Bee Colony (ABC) to improve the initial solutions: the ABC with the GRASP, and the ABC(C) with the GRASP and clustering, i.e., K-means-GRASP(C), modified K-means-GRASP(C), and ACO-GRASP(C). We also have two separate strategies for the scout bee phase. The first strategy, S1, constructs the solution by randomly selecting one of two heuristics: greedy and randomised insertion, or greedy insertion. The second strategy, S2, is based on the GRASP. Thus, we have eight different proposed methods. When compared to the state-of-the-art methods in terms of processing time, all eight methods are faster in all instances (except for the 1000-customer instances). Moreover, all proposed algorithms performed well for small-sized instances in terms of profit, achieving thirteen new best solutions and five solutions equal to the best-known solutions. However, the algorithms slightly lag behind on medium- and large-sized instances due to the greedy randomised strategy and the GRASP used in the scout bee phase. Finally, our algorithms rely on small population sizes and iteration counts to achieve the rapid processing times required by daily m-commerce apps, although reducing these parameters lowers the likelihood of obtaining good solution quality.
For future research, more neighbourhood operators will be carefully investigated in the scout bee phase to be applicable within the constraints of the MVPPDP, thus increasing the chances of producing high-quality solutions. Moreover, we will try other population-based metaheuristics, such as GAs, evolutionary algorithms, or other swarm-based metaheuristics that may contribute to new solutions for the MVPPDP.

Author Contributions

Conceptualization, A.I.A.; methodology, A.I.A.; formal analysis, A.I.A.; data curation, A.I.A.; Writing—original draft, A.I.A.; funding acquisition, A.I.A.; Supervision, M.I.H.; writing—review and editing, M.I.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Acknowledgments

The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2024-9/1).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Curry, D. Food Delivery App Revenue and Usage Statistics. 2024. Available online: https://www.businessofapps.com/data/food-delivery-app-market/ (accessed on 24 January 2024).
  2. Khalid, H. Benefits of Food Delivery App for Restaurants and Customers. Available online: https://enatega.com/benefits-of-food-delivery-app/ (accessed on 1 February 2024).
  3. Team, D.J. Top Food Delivery Apps in Saudi Arabia 2023. Available online: https://www.digitalgravity.ae/blog/top-food-delivery-apps-in-saudi-arabia/ (accessed on 2 January 2024).
  4. Gansterer, M.; Küçüktepe, M.; Hartl, R.F. The multi-vehicle profitable pickup and delivery problem. OR Spectr. 2017, 39, 303–319. [Google Scholar] [CrossRef]
  5. Talbi, E.G. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009; ISBN 978-0-470-27858-1. [Google Scholar]
  6. Alhujaylan, A.I.; Hosny, M.I. A GRASP-based solution construction approach for the multi-vehicle profitable pickup and delivery problem. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 111–120. [Google Scholar] [CrossRef]
  7. Alhujaylan, A.I.; Hosny, M.I. Hybrid Clustering Algorithms with GRASP to Construct an Initial Solution for the MVPPDP. Comput. Mater. Contin. 2020, 62, 1025–1051. [Google Scholar] [CrossRef]
  8. Archetti, C.; Speranza, M.G.; Vigo, D. Vehicle routing problems with profits. In Vehicle Routing: Problems, Methods, and Applications, 2nd ed.; SIAM: Philadelphia, PA, USA, 2014; Chapter 10; pp. 273–297. ISBN 978-1-61197-358-7. [Google Scholar]
  9. Toth, P.; Vigo, D. Vehicle Routing: Problems, Methods, and Applications, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2015; ISBN 1611973589. [Google Scholar]
  10. Küçüktepe, M. A General Variable Neighbourhood Search Algorithm for the Multi-Vehicle Profitable Pickup and Delivery Problem. University of Vienna 2014. Available online: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=A+General+Variable+Neighbourhood+Search+Algorithm+for+the+Multi-Vehicle+Profitable+Pickup+and+Delivery+Problem.+&btnG= (accessed on 22 June 2024).
  11. Haddad, M.N. An Efficient Heuristic for One-to-One Pickup and Delivery Problems. Ph.D. Thesis, IC/UFF, Niterói, Brazil, 2017. [Google Scholar]
  12. Bruni, M.E.; Toan, D.Q. The multi-vehicle profitable pick up and delivery routing problem with uncertain travel times. Transp. Res. Procedia 2021, 52, 509–516. [Google Scholar] [CrossRef]
  13. Karaboga, D. An Idea based on Honey Bee Swarm for Numerical Optimization. Technical Report-tr06, Erciyes University, Engineering Faculty, Computer Engineering Department. 2005. Available online: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=An+idea+based+on+honey+bee+swarm+for+numerical+optimization&btnG= (accessed on 22 June 2024).
  14. Alazzawi, A.K.; Rais, H.M.; Basri, S.; Alsariera, Y.A. PhABC: A Hybrid Artificial Bee Colony Strategy for Pairwise test suite Generation with Constraints Support. In Proceedings of the 2019 IEEE Student Conference on Research and Development (SCOReD), Bandar Seri Iskandar, Malaysia, 15–17 October 2019; pp. 106–111, ISBN 9781728126135. [Google Scholar]
  15. Szeto, W.Y.; Wu, Y.; Ho, S.C. An artificial bee colony algorithm for the capacitated vehicle routing problem. Eur. J. Oper. Res. 2011, 215, 126–135. [Google Scholar] [CrossRef]
  16. Wu, B.; Shi, Z. A clustering algorithm based on swarm intelligence. In Proceedings of the 2001 International Conferences on Info-Tech and Info-Net. Proceedings (Cat. No.01EX479), Beijing, China, 29 October–1 November 2001; pp. 58–66, ISBN 0-7803-7010-4. [Google Scholar]
  17. Yao, B.; Hu, P.; Zhang, M.; Wang, S. Artificial bee colony algorithm with scanning strategy for the periodic vehicle routing problem. Simulation 2013, 89, 762–770. [Google Scholar] [CrossRef]
  18. Karaboga, D.; Akay, B. A survey: Algorithms simulating bee swarm intelligence. Artif. Intell. Rev. 2009, 31, 61–85. [Google Scholar] [CrossRef]
  19. Creutz, M. Microcanonical monte carlo simulation. Phys. Rev. Lett. 1983, 50, 1411. [Google Scholar] [CrossRef]
  20. Bansal, J.C.; Sharma, H.; Jadon, S.S. Artificial bee colony algorithm: A survey. Int. J. Adv. Intell. Paradig. 2013, 5, 123–159. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the Artificial Bee Colony algorithm [14].
Figure 2. Comparison between state-of-the-art algorithms and the proposed algorithms based on time (seconds).
Figure 3. Comparison of the proposed algorithms based on the average of the best profits for each category.
Table 1. Results of tuning the number of iterations of the ABC.
Dataset Instances | 10 | 50 | 100 | 500 | 750
13-100-F-S | 32,274.04 | 35,432.98 | 39,437.98 | 39,438.58 | 39,438.58
14-100-F-L | 63,311.86 | 63,311.86 | 63,311.86 | 63,455.13 | 63,455.13
15-100-P-S | 68,625.48 | 68,625.48 | 71,185.28 | 72,032.22 | 72,032.22
16-100-P-L | 115,792.8 | 115,792.8 | 124,608.2 | 131,404.9 | 131,404.9
17-100-R-S | 67,974.56 | 74,246.27 | 74,246.27 | 74,330.95 | 74,330.95
18-100-R-L | 112,986.9 | 112,986.9 | 116,775.2 | 119,991.9 | 119,991.9
Table 2. Results of tuning the population size of the ABC.
Dataset Instances | 30 | 40 | 50 | 60
13-100-F-S | 39,438.58 | 39,438.58 | 41,472.86 | 41,472.86
14-100-F-L | 63,455.13 | 66,453.66 | 66,453.66 | 66,453.66
15-100-P-S | 72,032.22 | 72,032.22 | 72,170.54 | 72,170.54
16-100-P-L | 131,404.9 | 131,404.9 | 131,404.9 | 131,404.9
17-100-R-S | 74,330.95 | 77,484.06 | 77,484.06 | 77,484.06
18-100-R-L | 119,991.9 | 120,249.7 | 126,504.3 | 126,504.3
Table 3. Results of tuning the limit value of the ABC.
Dataset Instances | 100 | 200
13-100-F-S | 41,472.86 | 41,472.86
14-100-F-L | 66,453.66 | 66,453.66
15-100-P-S | 72,170.54 | 72,170.54
16-100-P-L | 131,404.9 | 131,404.9
17-100-R-S | 77,484.06 | 77,484.06
18-100-R-L | 126,504.3 | 126,504.3
Table 4. Results of tuning the demon value of the ABC.
Dataset Instances | 1000 | 2500 | 5000 | 7500
13-100-F-S | 41,472.86 | 41,596.33 | 41,954.86 | 41,954.86
14-100-F-L | 66,453.66 | 66,683.84 | 66,683.84 | 66,683.84
15-100-P-S | 72,170.54 | 72,449.56 | 72,449.56 | 72,449.56
16-100-P-L | 131,404.9 | 131,404.9 | 131,404.9 | 131,404.9
17-100-R-S | 77,484.06 | 77,484.06 | 77,484.06 | 77,484.06
18-100-R-L | 126,504.3 | 126,504.3 | 126,504.3 | 126,504.3
Table 5. Comparison between state-of-the-art algorithms and the proposed algorithms based on time (seconds).
Dataset | Previous Algorithms (5 min Runtime) | K-Means-GRASP-ABC(C)S1 | K-Means-GRASP-ABC(C)S2 | Modified K-Means-GRASP-ABC(C)S1 | Modified K-Means-GRASP-ABC(C)S2 | ACO-GRASP-ABC(C)S1 | ACO-GRASP-ABC(C)S2 | ABC(S1) | ABC(S2)
1-20-F-S | 300 | 14.48 | 14.47 | 15.20 | 14.58 | 16.78 | 12.12 | 31.04 | 30.65
2-20-F-L | 300 | 12.13 | 12.42 | 13.82 | 13.17 | 13.51 | 11.45 | 30.20 | 30.56
3-20-P-S | 300 | 16.62 | 17.48 | 17.83 | 17.17 | 18.72 | 15.85 | 33.77 | 39.28
4-20-P-L | 300 | 12 | 12.01 | 12.61 | 12.24 | 15.63 | 12.98 | 24.43 | 28.45
5-20-R-S | 300 | 14.87 | 15.21 | 14.97 | 14.57 | 16.67 | 14.57 | 30.79 | 31.17
6-20-R-L | 300 | 11.32 | 12.38 | 13.47 | 13.71 | 13.63 | 12.25 | 22.62 | 21.56
Average-20 | 300 | 13.57 | 14 | 14.65 | 14.24 | 15.82 | 13.20 | 28.81 | 30.28
7-50-F-S | 300 | 23.24 | 24.29 | 25.67 | 25.49 | 25.45 | 22.03 | 50.10 | 54.80
8-50-F-L | 300 | 19.79 | 21.24 | 21.53 | 21.45 | 22.18 | 20.65 | 43.81 | 46.63
9-50-P-S | 300 | 25.67 | 24.64 | 26.58 | 25.15 | 25.87 | 20.49 | 50.55 | 54.72
10-50-P-L | 300 | 21.63 | 22.64 | 24.41 | 22.96 | 23.34 | 20.90 | 44.77 | 48.77
11-50-R-S | 300 | 26.31 | 26.58 | 26.86 | 26.05 | 27.49 | 20.23 | 48.90 | 51.94
12-50-R-L | 300 | 22.73 | 20.93 | 23.37 | 21.87 | 22.77 | 20.91 | 45.94 | 48.34
Average-50 | 300 | 23.23 | 23.39 | 24.74 | 23.83 | 24.52 | 20.87 | 47.34 | 50.86
13-100-F-S | 300 | 37.87 | 33.57 | 36.58 | 35.10 | 36.89 | 33.36 | 80.65 | 83.81
14-100-F-L | 300 | 31.42 | 33.67 | 34.59 | 34.38 | 35.55 | 33.14 | 73.90 | 81.74
15-100-P-S | 300 | 31.03 | 32.03 | 36.98 | 35.55 | 36.75 | 33.23 | 76.39 | 82.69
16-100-P-L | 300 | 33.54 | 34.33 | 36.50 | 34.57 | 36.20 | 32.41 | 74.19 | 82.93
17-100-R-S | 300 | 38 | 35.92 | 37 | 35.81 | 38.19 | 34.03 | 82.59 | 86.96
18-100-R-L | 300 | 36.19 | 35.37 | 34.38 | 33.34 | 37.49 | 33.63 | 77.02 | 81.01
Average-100 | 300 | 34.68 | 34.15 | 36.01 | 34.79 | 36.85 | 33.30 | 77.46 | 83.19
19-250-F-S | 300 | 73.50 | 72.85 | 78.10 | 73.30 | 74.19 | 67.79 | 176.81 | 204.37
20-250-F-L | 300 | 79.65 | 73.28 | 80.50 | 76.28 | 77.66 | 65.10 | 181.10 | 195
21-250-P-S | 300 | 74.20 | 72.09 | 75.23 | 72.88 | 74.05 | 63.51 | 169.26 | 195.17
22-250-P-L | 300 | 73.82 | 72.718 | 79.53 | 74.40 | 77.86 | 65.44 | 179 | 193.60
23-250-R-S | 300 | 74.67 | 72.99 | 77.63 | 72.38 | 76.21 | 66.44 | 177.25 | 193.93
24-250-R-L | 300 | 79.55 | 73.21 | 84.04 | 73.84 | 79.62 | 69.46 | 178.99 | 194.47
Average-250 | 300 | 75.90 | 72.86 | 79.17 | 73.85 | 76.60 | 66.29 | 177.07 | 196.09
25-500-F-S | 300 | 163.89 | 155.32 | 172.74 | 158.36 | 162.15 | 142.52 | 388.62 | 474.16
26-500-F-L | 300 | 167.03 | 155.73 | 177.18 | 160.25 | 170.25 | 148.98 | 415.25 | 466.38
27-500-P-S | 300 | 160.91 | 146.27 | 172.04 | 158.45 | 162.76 | 140.62 | 386.55 | 465.87
28-500-P-L | 300 | 168.60 | 144.23 | 180.26 | 158.74 | 169.89 | 141.34 | 435.34 | 464.62
29-500-R-S | 300 | 170.15 | 140.16 | 170.50 | 158.58 | 162.42 | 139.80 | 383.89 | 468.15
30-500-R-L | 300 | 169.78 | 141.24 | 174.22 | 161.07 | 169.30 | 143.36 | 428.65 | 464.89
Average-500 | 300 | 166.73 | 147.16 | 174.49 | 159.24 | 166.13 | 142.77 | 406.38 | 467.35
31-1000-F-S | 300 | 432.51 | 354.73 | 448.29 | 408.67 | 426.32 | 397.22 | 1040.22 | 1406.22
32-1000-F-L | 300 | 439.85 | 360.11 | 447.72 | 423.01 | 461.18 | 430.54 | 1197.38 | 1438.43
33-1000-P-S | 300 | 434.34 | 364.52 | 432.93 | 418.51 | 391.01 | 416.76 | 1128.14 | 1398.49
34-1000-P-L | 300 | 448.06 | 378.57 | 440.70 | 435.69 | 389.93 | 430.39 | 1204.97 | 1416.64
35-1000-R-S | 300 | 451.79 | 404.20 | 425.83 | 421.47 | 381.24 | 411.31 | 1110.80 | 1296.40
36-1000-R-L | 300 | 420.15 | 410.26 | 446.23 | 418.03 | 393.29 | 420.10 | 1222.33 | 1286.07
Average-1000 | 300 | 437.78 | 378.73 | 440.28 | 420.90 | 407.16 | 417.72 | 1150.64 | 1373.71
Table 6. Comparison of our proposed algorithms with previous methods based on average profit.
Dataset | C1—Seq VNS [9] | C2—Seq VND [9] | C1—Self Adaptive VND [9] | C2—Self Adaptive VND [9] | IPPD [10] | K-Means-GRASP-ABC(C)S1 | K-Means-GRASP-ABC(C)S2 | Modified K-Means-GRASP-ABC(C)S1 | Modified K-Means-GRASP-ABC(C)S2 | ACO-GRASP-ABC(C)S1 | ACO-GRASP-ABC(C)S2 | ABC(S1) | ABC(S2)
1-20-F-S 30,315.530,315.530,315.530,315.530,315.530,267.7530,427.5827,007.5827,007.5825,490.7625,490.7630,395.9230,294.77
2-20-F-L 38,765.638,765.637,866.338,765.639,179.6637,642.3837,468.8239,773.3439,240.9536,342.5336,322.3339,459.1536,153.93
3-20-P-S 51,473.452,128.452,565.152,565.152,56651,481.843,340.7151,481.843,009.5542,917.3434,107.6452,684.9852,565.1
4-20-P-L 58,365.658,365.658,365.658,365.658,77757,350.3557,086.3657,350.3557,301.8856,523.3247,367.5458,888.6456,808.21
5-20-R-S 36,469.536,469.536,469.536,469.536,469.534,090.5834,090.5830,237.0229,843.5831,741.0831,741.0836,469.5236,469.52
6-20-R-L 43,368.643,368.643,368.643,368.643,781.8142,215.5342,086.4444,401.4543,736.5340,769.6740,450.5544,244.2742,143.26
Average-20 43,126.3743,235.5343,158.4343,308.3243,514.9142,174.7340,750.0841,708.5940,023.3538,964.1235,913.3243,690.4142,405.8
Standard Deviation 10,289.9410,399.1010,559.1410,475.0410,560.1310,436.429349.2311,822.8110,907.8210,656.367500.0110,573.4010,312.25
7-50-F-S 22,30622,30622,30622,30622,307.1518,662.8321,603.8318,662.8321,603.8321,281.8825,123.1524,975.2524,998.45
8-50-F-L 58,050.858,050.857,303.457,996.758,050.850,772.6450,873.6850,881.5550,773.5446,699.3147,534.7953,555.2553,268.82
9-50-P-S 57,651.757,651.757,651.757,651.757,651.758,996.1753,258.8358,996.1753,064.1455,643.2555,643.2557,651.7157,651.71
10-50-P-L 132,648132,648,0128,353128,334.8129,640.3109,726.8117,770.8116,372116,149.797,740.02106,491.9124,819.3128,477.1
11-50-R-S 33,340.433,340.433,340.433,340.433,340.428,865.4828,907.3328,865.4828,907.3331,251.3429,969.1633,054.5733,098.58
12-50-R-L 79,350.379,350.377,655.579,350.377,832.5676,422.6276,503.2876,309.4676,408.9164,359.5564,519.7976,582.1575,732.86
Average-50 63,891.250,139.8462,768.3363,163.3263,137.1657,241.0958,152.9558,347.9257,817.9152,829.2254,880.3561,773.0362,204.58
Standard Deviation 39,248.4522,523.3637,635.6937,749.5438,077.6133,048.6335,097.3135,184.3734,547.7227,041.6829,384.5835,937.4137,167.44
13-100-F-S 41,464.8441,955.241,523.542,17441,896.8635,386.2436,055.0135,303.6736,625.933,147.9233,187.5740,596.7740,543.56
14-100-F-L 81,271.281,97180,080.480,500.279,479.8858,704.760,780.7457,607.0760,035.4351,106.5847,074.9864,937.4767,237.63
15-100-P-S 70,387.470,398.571,113.871,082.869,436.2554,19757,206.649,566.3659,481.8460,614.7845,995.0968,874.0271,290.33
16-100-P-L 124,288123,931122,691.8122,183.2125,05798,140.3104,031.6100,122.4110,09992,681.3896,365.75112,690.5124,487.1
17-100-R-S 76,885.478,418.876,954.479,651.277,012.7561,579.8161,579.8161,960.7861,698.8452,689.752,163.7277,334.875,925.47
18-100-R-L 131,052.6130,670.4130,403.4130,366.6131,058.698,522.81100,291.3104,771.9109,450.292,423.5493,867.6115,108.7122,320.4
Average-100 87,558.2487,890.8287,127.8887,659.6787,323.5667,755.1469,990.8468,222.0372,898.5463,777.3261,442.4579,923.783,634.09
Standard Deviation 34,09933,674.3633,546.833,090.4534,316.0125,383.1526,636.5928,060.8630,021.6724,032.0426,834.729,022.8933,175.38
19-250-F-S 73,259.9472,445.372,70372,287.970,799.1936,045.9432,341.3543,729.2938,692.8134,038.3133,942.5748,988.6246,216.19
20-250-F-L 132,996132,438132,394.4133,404129,485.354,144.6463,245.1270,325.7481,370.7951,239.4857,273.971,397.9181,288.1
21-250-P-S 111,334.4111,817.8109,449.4109,355.8110,440.362,245.0571,066.4474,931.583,992.5572,198.269,410.9786,624.3293,568.56
22-250-P-L 197,899.8197,214.2194,131.2194,549194,338.6112,454.5131,917.5133,567.4155,750.395,906.32127,789.5146,104160,301.3
23-250-R-S 162,410.8162,448.2158,164.4159,129.6158,414.699,90090,042.33109,717.893,812.47102,500.199,734.68125,201.2120,776
24-250-R-L 269,072.8268,713.8266,559267,291.4266,204.2148,156.7176,579.9180,807.7177,336146,270.4164,012.2204,037.4203,859.8
Average-250 157,829157,512.9155,566.9156,003154,94785,491.1494,198.78102,179.9105,159.283,692.1492,027.3113,725.6117,668.3
Standard Deviation 69,184.169,165.7168,327.7668,666.6968,777.7442,022.4352,050.3749,795.5151,632.1540,17448,193.9856,662.6257,036.7
25-500-F-S 158,425.4159,146.2157,399.8159,526.6155,683.959,737.2771,001.3151,581.3873,939.0354,762.9751,117.3975,731.579,862.1
26-500-F-L 243,210244,591.6245,600.8245,183.6242,173.1101,753.4120,779.3106,505.3129,200.462,041.1988,256.7112,172121,824.9
27-500-P-S 201,195.4202,102.2197,227.6198,740203,932.9105,678.8146,631.6107,797.6143,918.5102,996.9105,667.8130,361.2143,638.1
28-500-P-L 319,239.2321,181.6315,854.6317,610.8329,642.8229,525.6241,331.4225,812.4243,361.8152,573.5191,169.5201,973.8222,042.1
29-500-R-S 271,154.4272,715.2271,264.2273,095260,505.8126,298.6156,901.1108,998.5162,660.4127,089.1123,505.3166,111.6166,226.9
30-500-R-L 402,557.2406,689.6399,990.8404,517408,819.4259,683.6272,509241,454282,087167,819.5205,885.1239,675.7255,261.1
Average-500 265,963.6267,737.7264,556.3266,445.5266,793147,112.9168,192.3140,358.2172,527.9111,213.8127,600.3154,337.6164,809.2
Standard Deviation 69,184.169,165.7168,327.7668,666.6968,777.7442,022.4352,050.3749,795.5151,632.1540,17448,193.9856,662.6257,036.7
31-1000-F-S 150,920.4151,943.2151,990153,820.6152,358.733,184.0348,758.3539,530.2248,457.5723,266.520,723.0533,125.1941,340.89
32-1000-F-L 218,211218,213223,999.8222,072.8222,089.192,970.3593,386.2392,142.1595,500.8824,110.8544,359.3557,137.2468,705.69
33-1000-P-S 439,681.8441,120.2433,353.8429,853457,759.4272,126.9288,095.1236,797.1280,912.6178,654.3201,178.4240,944.9263,540.2
34-1000-P-L 678,396.2651,754.6677,511.6669,212.8681,370.5498,835.3519,115.5492,753.1507,105.4346,244.9366,004.4397,930426,372.5
35-1000-R-S 455,054.6461,394459,167.4462,999.2461,919.6207,134255,835.4192,030.8242,210.5124,538.1176,577.1212,979.1228,716
36-1000-R-L 668,655662,605.4668,615.4667,184.8676,159.7464,549.3450,834.3407,703.7448,210.5297,212.8322,568.7364,643.1372,491.4
Average-1000 435,153.2431,171.7435,773434,190.5441,942.8261,466.7276,004.1243,492.9270,399.6165,671.2188,568.5217,793.2233,527.8
Standard Deviation 219,981.4212,861.8218,443.1216,320221,371.6190,345187,196176,803.2183,520.7135,712.2140,506.7151,318.2155,901.2
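Each size block in Table 6 closes with an average row and a standard deviation row computed over the six instances of that size. The reported figures are consistent with the sample standard deviation; the small sketch below reproduces the ABC(S1) entries of the 100-customer block (the profit values are copied from the table, and the choice of the sample rather than the population variant is inferred from the numbers, not stated in the text).

```python
# Reproduce the Average-100 and Standard Deviation rows of Table 6 for the ABC(S1) column.
from statistics import mean, stdev  # stdev = sample standard deviation (inferred variant)

abc_s1_100 = [40596.77, 64937.47, 68874.02, 112690.5, 77334.8, 115108.7]

print(f"Average-100:        {mean(abc_s1_100):,.2f}")   # ~79,923.71 (table: 79,923.7)
print(f"Standard deviation: {stdev(abc_s1_100):,.2f}")  # ~29,022.89 (table: 29,022.89)
```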
Table 7. Comparison between the proposed algorithms and previous methods based on the best solution and gap values.
Dataset | Best Known Profit | C1—Seq VNS [4]: Best Profit, Gap | C2—Seq VND [4]: Best Profit, Gap | C1—Self Adaptive VND [4]: Best Profit, Gap | C2—Self Adaptive VND [4]: Best Profit, Gap | IPPD [10]: Best Profit, Gap | K-Means-GRASP-ABC(C)S1: Best Profit, Gap
1-20-F-S 30,315.530,315.5030,315.5030,315.5030,315.5030,315.5030,267.75−0.15
2-20-F-L 39,340.338,765.6−1.538,765.6−1.538,765.6−1.538,765.6−1.539,179.66−0.4137,750.35−4
3-20-P-S 52,855.251,473.42.752,561.1−0.652,565.1−0.652,565.1−0.652,566−0.5551,515.53−2.5
4-20-P-L 58,365.658,365.6058,365.6058,365.6058,365.6058,7770.757,350.35−1.7
5-20-R-S 36,469.536,469.5036,469.5036,469.5036,469.5036,469.5034,090.58−6.5
6-20-R-L 43,764.343,368.6−0.943,368.6−0.943,368.6−0.943,368.6−0.943,781.810.0442,353.35−3.2
Average-20 43,518.443,126.37−0.8543,307.65−0.5043,308.32−0.5043,308.32−0.5043,514.91−0.0442,221.32−2.9
7-50-F-S 22,44122,306−0.622,306−0.622,306−0.622,306−0.622,307.15−0.618,662.83−16.8
8-50-F-L 58,050.858,050.8058,050.8058,050.8058,050.8058,050.8051,040.14−12
9-50-P-S 57,651.757,651.7057,651.7057,651.7057,651.7057,651.7058,996.172.3
10-50-P-L 132,648132,6480132,6480132,6480132,6480132,6480114,962.9−13.3
11-50-R-S 33,340.433,340.4033,340.4033,340.4033,340.4033,340.4028,907.33−13.2
12-50-R-L 79,350.379,350.3079,350.3079,350.3079,350.3077,832.56−1.9576,479.84−3.6
Average-50 63,913.763,891.20−0.163,891.20−0.1063,891.20−0.1063,891.20−0.1063,638.44−0.4358,174.87−8.9
13-100-F-S 44,402.342,174.4−5.342,174.4−5.342,174.4−5.342,215.5−5.241,956.25−5.8336,322.94−18.1
14-100-F-L 81,999.481,999.4081,999.4081,621−0.580,848.1−0.288,891.4060,767.96−25.8
15-100-P-S 71,512.470,836−171,326.4−0.371,399.2−0.271,399.2−0.271,234.58−0.3954,428.34−23.8
16-100-P-L 125,032124,770−0.2124,770−0.2123,766−1124,910−0.1125,786.7−0.699,860.9−20.1
17-100-R-S 80,878.880,878.8078,818−2.680,878.8080,878.8080,878.8061,647.98−23.7
18-100-R-L 131,845131,473−0.3131,8450131,339−0.4131,339−0.4131,8450102,403.8−22.3
Average-100 89,278.3297,991.44−1.1388,488.87−1.4088,529.73−1.2388,598.43−1.0290,098.79−1.1469,238.65−22.4
19-250-F-S 75,309.173,610.9−2.373,575.7−2.473,835.5273,544,7−2.472,038.6−4.5439,672.94−47.3
20-250-F-L 136,594134,889−1.3134,186−1.8132,870−2.8135,622−0.7131,012.9−4.2654,144.64−60.3
21-250-P-S 113,886112,427−1.3112,480−1.3112,428−1.3110,062−3.5112,747.3−1.0169,574.44−38.9
22-250-P-L 200,402200,4020198,611−0.9195,135−2.7196,524−2198,241.2−1.09112,454.5−43.8
23-250-R-S 164,070162,960−0.7164,0700162,115−1.2162,115−1.2161,804.7−1.4103,456−36.9
24-250-R-L 274,563271,185−1.2271,193−1.2272,957−0.6269,790−1.8269,734.7−1.79148,156.7−46
Average-250 160,804159,245.65−1.13159,019.28−1.27158,223.42−1.10174,822.60−1.93157,596.57−2.3587,909.87−45.3
25-500-F-S 167,080161,063−3.7161,040−3.8159,064−5161,375−3.5160,592.1−4.0459,737.27−64.2
26-500-F-L 259,319249,171−4.1249,661−3.9247,986−4.6249,312−4249,105.7−4.1107,852.1−58.4
27-500-P-S 206,788203,832−1.5205,480−0.6200,764−3201,590−2.6208,287.70.72105,678.8−48.8
28-500-P-L 331,291321,471−3.1325,522−1.8320,872−3.2321,155−3.2337,358.91.82229,525.6−30.7
29-500-R-S 281,711279,105−0.9278,233−1.3274,419−2.7276,392−1.9265,040−6.29126,298.6−55.1
30-500-R-L 417,282407,689−2.4409,755−1.8407,849−2.3409,262−2411,967.6−1.29259,683.6−37.7
Average-500 277,245.2270,388.50−2.61271,615.17−2.20268,492.33−3.47269,847.67−2.87272,058.67−2.20165,807.7−40.1
31-1000-F-S 173,826154,196−12.7151,943.2−12.2154,357−12.6156,106−11.4152,358.7−11.833,184.03−80.9
32-1000-F-L 247,385221,958−11.5221,322−11.8224,999−9.9226,913−9222,089.1−10.192,970.35−62.4
33-1000-P-S 464,580450,349−3.2450,337−3.2442,311−5442,218−5.1457,759.41.47272,126.9−41.4
34-1000-P-L 712,918693,407−2.8672,910−5.9684,121−4.2683,211−4.3681,370.5−3.26498,835.3−30
35-1000-R-S 492,083463,625−6.1472,626−4.1465,022−5.8469,162−4.9461,919.6−4.90207,134−57.9
36-1000-R-L 705,167673,951−4.6669,926−5.3676,720−4.4684,217−3.1676,159.7−2.90464,549.3−34.1
Average-1000 465,993.2405,958−6.8439,844.03−7.08441,255−6.98443,638−6.3441,942.83−5.2261,466.65−43.8
Dataset | K-Means-GRASP-ABC(C)S2: Best Profit, Gap | Modified K-Means-GRASP-ABC(C)S1: Best Profit, Gap | Modified K-Means-GRASP-ABC(C)S2: Best Profit, Gap | ACO-GRASP-ABC(C)S1: Best Profit, Gap | ACO-GRASP-ABC(C)S2: Best Profit, Gap | ABC(S1): Best Profit, Gap | ABC(S2): Best Profit, Gap
1-20-F-S 30,427.580.327,007.58−10.927,007.58−10.925,490.76−15.925,490.76−15.930,554.620.7830,315.520
2-20-F-L 37,556.07−4.540,104.231.939,240.95−0.236,423.3−7.436,322.33−7.640,104.231.936,234.48−7.8
3-20-P-S 44,334.2−16.151,515.53−2.544,334.2−16.142,944.78−18.734,107.64−35.453,164.530.5852,565.1−0.5
4-20-P-L 57,350.35−1.757,350.35−1.757,350.35−1.756,603.63−347,402.91−18.759,140.981.3257,286.89−1.8
5-20-R-S 34,090.58−6.530,827.18−15.429,843.58−18.131,741.08−12.931,741.08−12.936,469.52036,469.520
6-20-R-L 42,169.13−3.644,764.072.243,780.720.0441,071.62−6.140,600.21−7.244,707.232.142,873.71−2
Average-20 40,987.99−5.841,928.16−3.640,259.56−7.439,045.86−10.235,944.16−17.444,023.521.1642,624.2−2
7-50-F-S 21,603.83−3.721,603.83−3.721,603.83−3.721,281.88−5.125,123.1511.924,975.2511.224,998.4511.3
8-50-F-L 51,001.04−12.151,001.04−12.150,896.3−12.349,128.45−15.350,061.24−13.753,710.99−7.454,288.08−6.4
9-50-P-S 54,165.39−654,165.39−653,064.14−7.955,643.25−3.455,643.25−3.457,651.71057,651.710
10-50-P-L 119,099.6−10.2119,099.6−10.2116,868.6−11.8100,393.3−24.3106,544.8−19.6126,360−4.7133,743.20.82
11-50-R-S 28,907.33−13.228,907.33−13.228,907.33−13.231,251.34−6.229,969.16−10.133,058.21−0.833,159.12−0.5
12-50-R-L 76,697.65−3.376,697.65−3.376,604.84−3.465,792.21−1765,752.8−17.177,467.34−2.376,517.36−3.5
Average-50 58,579.14−8.358,579.14−8.357,990.84−9.253,915.07−15.655,515.73−13.162,203.92−2.663,392.99−0.8
13-100-F-S 36,188.32−18.435,937.96−1938,323.6−13.633,565.46−24.433,566.06−24.441,812.08−5.841,636.8−6.2
14-100-F-L 63,488.24−22.560,851.27−25.761,351.99−25.152,328.66−36.149,789.52−39.266,375.92−1972,595.41−11.4
15-100-P-S 60,815.37−14.954,190.18−24.260,554.49−15.360,614.78−15.247,498.28−33.569,275.79−3.171,400.63−0.1
16-100-P-L 104,786.4−16.1102,121.7−18.3111,728.2−10.695,836.25−23.3101,142.5−19.1115,579.2−7.5128,473.22.7
17-100-R-S 61,647.98−23.761,962.79−23.361,962.79−23.352,689.7−34.852,689.7−34.877,484.06−4.177,484.06−4.1
18-100-R-L 101,013.6−23.3105,175.3−20.2110,683.8−1695,363.69−27.697,568.94−25.9119,554.3−9.3126,871.5−3.7
Average-100 71,323.32−20.170,039.87−21.574,100.81−1765,066.42−27.163,709.17−28.681,680.23−8.586,410.27−3.2
19-250-F-S 35,443.33−52.948,079.56−36.139,848.03−4736,309.86−51.736,612.19−51.336,309.86−51.736,612.19−51.3
20-250-F-L 67,014.68−50.970,325.74−48.587,575.52−35.851,239.48−62.462,831.21−5451,239.48−62.462,831.21−54
21-250-P-S 77,941.82−31.581,510.78−28.492,124.73−19.173,654.78−35.373,143.68−35.773,654.78−35.373,143.68−35.7
22-250-P-L 135,164.3−32.5133,567.4−33.3161,377.2−19.495,906.32−52.1131,668.8−34.295,906.32−52.1131,668.8−34.2
23-250-R-S 97,231.36−40.7113,547.4−30.795,471.16−41.8107,628.9−34.4101,863.1−37.9107,628.9−34.4101,863.1−37.9
24-250-R-L 187,891.9−31.5180,807.7−34.1180,803−34.1146,270.4−46.7166,523.5−39.3146,270.4−46.7166,523.5−39.3
Average-250 100,114.6−37.7104,639.8−34.9109,533.3−31.885,168.2−4795,440.4−40.685,168.2−4795,440.4−40.6
25-500-F-S 74,692.46−55.251,581.38−69.176,481.48−54.261,718.84−6354,247.37−67.578,432.78−5386,250.21−48.3
26-500-F-L 124,478.4−51.9106,505.3−58.9131,572.4−49.272,388.37−7293,308.66−64112,172−56.7126,203.2−51.3
27-500-P-S 152,566.6−26.2107,797.6−47.8150,645.3−27.1109,238−47.1112,111.9−45.7136,168.2−34.1148,384.5−28.2
28-500-P-L 249,759.7−24.6225,812.4−31.8247,180.6−25.3152,573.5−53.9196,969.2−40.5201,973.8−39225,040.5−32
29-500-R-S 172,397.2−38.8108,998.5−61.3175,267.1−37.7141,743.7−49.6128,570.6−54.3178,693.9−36.5173,003.3−38.5
30-500-R-L 275,488.2−33.9241,454−42.1294,424.7−29.4167,819.5−59.7214,081.3−48.6239,675.7−42.5261,517.2−37.3
Average-500 174,897.1−36.9140,358.2−49.3179,261.9−35.3117,580.3−57.5133,214.8−51.9157,852.7−43170,066.5−38.6
31-1000-F-S 51,188.29−70.539,530.22−77.251,585.38−70.323,266.5−86.625,563.9−85.233,125.19−80.942,197.08−75.7
32-1000-F-L 97,093.51−60.792,142.15−62.797,772.35−60.424,110.85−90.247,620.03−80.757,137.24−76.973,126.9−70.4
33-1000-P-S 293,900.2−36.7236,797.1−49287,774.9−38178,654.3−61.5209,372.1−54.9240,944.9−48.1267,814.2−42.3
34-1000-P-L 536,199.8−24.7492,753.1−30.8534,689.9−24.9346,244.9−51.4381,116.4−46.5397,930−44.1430,894.4−39.5
35-1000-R-S 264,295.2−46.2192,030.8−60.9249,078.6−49.3124,538.1−74.6181,237−63.1212,979.1−56.7232,298.7−52.7
36-1000-R-L 458,126.7−35407,703.7−42.1463,915.7−34.2297,212.8−57.8335,935.6−52.3364,643.1−48.2380,268.9−46
Average-1000 276,004.15−40.7243,492.8−47.7280,802.8−39.7165,671.2−64.4196,807.5−57.7217,793.3−53.2237,766.7−48.9
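The gap values in Table 7 are consistent with the relative difference between the best profit found by an algorithm and the best-known profit, expressed as a percentage, i.e. Gap = 100 × (Best Profit − Best Known Profit) / Best Known Profit (this reading is inferred from the table entries rather than restated here). For example, on instance 2-20-F-L, K-Means-GRASP-ABC(C)S1 reaches 37,750.35 against a best-known profit of 39,340.3, giving 100 × (37,750.35 − 39,340.3) / 39,340.3 ≈ −4.0, which matches the reported gap of −4.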
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
