Article

Open Competency Optimization: A Human-Inspired Optimizer for the Dynamic Vehicle-Routing Problem

by Rim Ben Jelloun, Khalid Jebari *,† and Abdelaziz El Moujahid
IABL, FSTT, Abdelmalek Essaadi University, Tetouan 93000, Morocco
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2024, 17(10), 449; https://doi.org/10.3390/a17100449
Submission received: 14 August 2024 / Revised: 24 September 2024 / Accepted: 1 October 2024 / Published: 9 October 2024
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)

Abstract: The vehicle-routing problem (VRP) is a popular area of research. This popularity springs from its wide application to many real-world problems in logistics, network routing, e-commerce, and various other fields. The VRP is simple to formulate but very difficult to solve, and exact approaches require a great deal of time. In these cases, researchers use the approximate solutions offered by metaheuristics. This work involved the design of a new metaheuristic called Open Competency Optimization (OCO), which was inspired by human behavior during the learning process and based on the competency approach. The aim is the construction of solutions that represent learners' ideas in the context of an open problem. The candidate solutions in OCO evolve over three steps. In the first step, each learner builds a path of learning (finding the solution to the problem) through self-learning, which depends on their abilities. In the second step, each learner responds positively to the best ideas in their group (the construction of each group is based on the competency of the learners or the neighbor principle). In the last step, the learners interact with the best one in the group and with the leader. To prove the relevance of the proposed algorithm, OCO was tested on dynamic vehicle-routing problems along with the Generalized Dynamic Benchmark Generator (GDBG).

1. Introduction

In the field of optimization, metaheuristics have become unavoidable. A quick review of the literature is enough to reveal the remarkable growth in the number of algorithms. These new tools are an obvious alternative to classical methods when an exact solution cannot be found or requires a great deal of processing time.
These novel algorithms help to find a satisfactory solution (not necessarily an exact one) in a reasonable time, given that their main purpose is to explore the search space efficiently.
In his famous book "Metaheuristics: From Design to Implementation" [1], El-Ghazali Talbi proposed a classification of metaheuristics according to their solutions. On one hand, there are single-solution-based metaheuristics (S-metaheuristics); on the other hand, there are population-based metaheuristics (P-metaheuristics). In the first category, a single solution evolves iteratively until the best possible solution is found; examples of S-metaheuristics include Simulated Annealing and Tabu Search. In the second category, a population of solutions (the number of solutions is equal to the size of the population) evolves over generations (iterations) until an optimal solution is found. The list of P-metaheuristics is very long, and it includes genetic algorithms, particle swarm optimization, ant colony optimization, the gray wolf optimizer, the artificial bee colony, and many others. Another way to classify these metaheuristics is according to their inspiration: most are inspired by the social behavior of animals or insects, while others are derived from physical or chemical phenomena. Human behavior, however, is rarely used as an inspiration.
This study proposes a new metaheuristic named Open Competency Optimization (OCO), which is based on a specific human learning approach known in the literature as the competency-based approach [2]. To demonstrate the performance of the proposed algorithm, OCO is not applied in a static environment with CEC 2016, 2020, or 2022 [3] or on benchmarks related to engineering test applications, as is common in other research, but on a dynamic optimization benchmark that is widely used in the literature, the Generalized Dynamic Benchmark Generator (GDBG), and in a vital field, the dynamic VRP.
The goal of applying OCO in dynamic optimization is twofold. First, real-world applications of optimization problems are often dynamic. Second, metaheuristics require several remedies and adjustments to be applied in dynamic optimization. Indeed, in real life, optimization problems are not always stationary and can change stochastically over time. Optimization algorithms must, therefore, be capable of tracking the optimum as it evolves, meaning that they must follow the shifting optimum over time. Dynamic optimization problems are inherently more challenging than static ones, and metaheuristics have been successfully applied across various domains. Metaheuristics ensure convergence toward the desired optimum, meaning that individuals representing good candidate solutions become concentrated at a specific point.
The main drawback of applying P-metaheuristics in a dynamic environment is that once the algorithm tends to converge around some optima, it begins to lose its ability to continue searching for a new optimum. Therefore, a key point in optimization approaches is the need to increase or maintain the diversity of the population such that the algorithm retains its ability to explore the search space, even when the population has partially converged to an optimal solution.
Various methods and techniques have been developed to address the challenges raised by dynamic environments. We will review the most important approaches, which allow us to improve P-metaheuristics in dynamic environments.
In the literature, several improvements have been investigated:
  • The use of implicit or explicit memory [4];
  • The use of multiple populations [5];
  • The anticipation (prediction) of environmental changes [6];
  • The maintenance of diversity through several strategies, such as random immigrants, hypermutation, adapting parameters, new operators, or niche techniques [7,8].
To solve the dynamic optimization problem, OCO reacts in a very simple way. On one hand, the size of the solution population is not static but, rather, dynamic. At each change in environment, the size increases or decreases. On the other hand, new solutions are inserted into the population to maintain its diversity.
The main contributions of this study are as follows:
  • We developed a novel metaheuristic inspired by the behavior of students in competency-based learning.
  • Recognizing that metaheuristics tend to lose diversity in dynamic environments, we introduced a mechanism for injecting new solutions into the population. Consequently, the size of the solution population is dynamic rather than static.
  • We applied our new metaheuristic to two dynamic optimization problems; the first is the Generalized Dynamic Benchmark Generator, and the second is the dynamic vehicle-routing problem.
  • We tested the diversity of our metaheuristic using a novel measurement technique.
This article is organized as follows. The next section presents the dynamic VRP, the second test chosen in this study to show the effectiveness of OCO. The third section concentrates on the educational inspiration of the new metaheuristic, as well as the three stages on which OCO is based. The fourth section presents the experimental studies: to evaluate the performance of OCO, four sets of experiments based on the benchmark proposed by Li et al. [9] were carried out. The objective of the first experiment is to investigate the working mechanism of OCO, analyze the advantages of the different remedies, and study the diversity of OCO. In the second set of experiments, the performance of OCO is compared with that of three well-known genetic algorithms commonly used in the literature. The aim of the third experiment is to study the mechanism of diversity with a new diversity measure, and the fourth set of experiments applies OCO to the dynamic vehicle-routing problem. The conclusion of this work is given in the fifth section.

2. Dynamic VRP

In this section, we will discuss the second test chosen in this study (dynamic VRP) to demonstrate the effectiveness of the OCO algorithm.
The vehicle-routing problem can be described as the process of distributing a quantity of a product or a service in good condition and within a suitable amount of time.
The vehicle-routing problem (VRP) is a generalization of the traveling salesman problem with several travelers, which are called vehicles. The goal is to visit all customers using a fleet of vehicles that all depart from and return to the depot (Figure 1).
Let $V = \{v_0, v_1, \ldots, v_n\}$ correspond to the $n$ customers, where $v_0$ represents the depot. Each customer $i \in V \setminus \{0\}$ places an order for a product, and $d_i$ denotes the quantity of products delivered to it. A fleet of $M$ vehicles $(1, 2, \ldots, M)$ of identical capacity $Q$ is available. For all customers, $c_{ij}$ is the direct transport cost between customers $i$ and $j$; $c_{ij}$ is proportional to the travel distance. The objective of the VRP is to find $M$ tours (departing from and returning to the depot) such that every customer is visited exactly once while minimizing the total transport cost and respecting the storage capacity of the vehicles. As the VRP generalizes the famous traveling salesman problem, it is an NP-hard problem. Fisher and Jaikumar [10] gave the following formulation of this problem: there is a binary variable $X_{ij}^{k}$, where $i$ and $j$ denote customers and $k$ denotes a vehicle, with
$$X_{ij}^{k} = \begin{cases} 1, & \text{if } j \text{ is visited after } i \text{ by vehicle } k \\ 0, & \text{otherwise} \end{cases} \quad (1)$$
There is another binary variable $Y_i^{k}$, where $i$ denotes a customer and $k$ denotes a vehicle:
$$Y_i^{k} = \begin{cases} 1, & \text{if vehicle } k \text{ visits customer } i \\ 0, & \text{otherwise} \end{cases} \quad (2)$$
The goal is to minimize the following function:
$$z = \sum_{k=1}^{M} \sum_{i=0}^{n} \sum_{j=0}^{n} c_{ij}\, X_{ij}^{k} \quad (3)$$
with the following constraints:
$$\forall k \in \{1, \ldots, M\}: \quad \sum_{i=1}^{n} d_i\, Y_i^{k} \le Q \quad (4)$$
$$\forall i \in \{1, \ldots, n\}: \quad \sum_{k=1}^{M} Y_i^{k} = 1 \quad (5)$$
$$\sum_{k=1}^{M} Y_0^{k} = M \quad (6)$$
  • Constraint 4 requires that the loading capacity of each vehicle is respected;
  • Constraint 5 requires a unique passage for each customer;
  • Constraint 6 allows one to check the construction of M tours.
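To make the model concrete, the following is a minimal sketch (our code, not the authors') of how a candidate solution, given as a set of routes, could be evaluated against objective (3) and constraints (4) and (5); the cost matrix, demands, and capacity are illustrative placeholders.

```python
def evaluate_vrp(routes, cost, demand, Q):
    """Evaluate a candidate solution given as a list of routes (lists of
    customer indices; the depot is index 0). Returns the total cost z of
    Eq. (3), or None if constraint (4) (capacity) or (5) (each customer
    visited exactly once) is violated. `cost`, `demand`, and `Q` are
    illustrative placeholders, not data from the paper."""
    total, visited = 0.0, set()
    for route in routes:
        if sum(demand[i] for i in route) > Q:    # capacity constraint (4)
            return None
        path = [0] + route + [0]                 # each tour leaves and re-enters the depot
        total += sum(cost[a][b] for a, b in zip(path, path[1:]))
        visited.update(route)
    if visited != set(range(1, len(demand))):    # constraint (5): every customer once
        return None
    return total
```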
Several metaheuristics are used to solve the vehicle-routing problem. They include simulated annealing [11], tabu search [12,13], the ant colony algorithm [14,15], and genetic algorithms [16,17]. A detailed study of several approaches to solving the VRP can be found in [18,19,20,21] (see Table 1).
However, many real problems are not static but have a dynamic character. A new customer can appear at the last minute, a road can become impassable, road congestion can occur, or a vehicle can break down. In these cases, the vehicle planning must be changed in response to the changes that arise over time [23,24,25,26]. Figure 2 illustrates this dynamism. Several metaheuristics have been presented in the literature to solve the dynamic VRP (see Table 2). To mark the dynamic character of the problem, Mavrovouniotis et al. [27] proposed a memetic ant colony optimization algorithm with an adaptation for the evaporation of pheromones.
Likewise, in other work based on the ant colony system, Mavrovouniotis et al. also gave a panoply of ideas with the insertion of random immigrants, elitism-based immigrants, or memory-based immigrants [28,29]. Housroum et al. proposed a genetic algorithm with PMX crossover and swap or 2-opt mutations [30]. The Evolutionary Hyper-Heuristics proposed by Garrido et al. evolved and utilized constructive, perturbation, and noise heuristics to solve the dynamic VRP [31]. To maintain the diversity of the population, Khouadjia et al. employed multiple populations in their Multi-Adaptive Particle Swarm Optimization method [32].
In recent years, a number of innovative metaheuristic strategies have been developed and applied to the DVRP, demonstrating substantial improvements in efficiency and solution quality. Among these, hybrid metaheuristics have gained considerable attention for their ability to combine the strengths of multiple algorithms. For example, recent studies have explored the integration of genetic algorithms (GAs) with elastic strategies, creating hybrid models that leverage the global search capabilities of GAs with the precision of local optimization methods [33]. This approach has shown promise in rapidly adapting to dynamic changes, offering superior performance in highly variable environments.
Another promising development in the field is the application of reinforcement learning (RL) in conjunction with traditional metaheuristics. RL-based methods enable algorithms to learn and improve their decision-making processes over time, making them highly effective in dynamic scenarios where conditions are continuously changing. For instance, the recent work by Achamrah et al. [34] demonstrated the integration of RL with genetic algorithms (GAs) to tackle the DVRP, yielding significant improvements in route optimization and computational efficiency. The RL component allows the algorithm to learn from past experiences, enhancing its ability to anticipate and react to future changes.
The rise of swarm intelligence techniques has also contributed to advancements in solving the DVRP. Algorithms inspired by the collective behavior of social organisms, such as ant colony optimization (ACO) and bee colony optimization (BCO), have been refined and adapted for the DVRP. ACO, in particular, has been enhanced with mechanisms for better handling real-time data, allowing it to dynamically update routes as new information becomes available. The pheromone-based communication in ACO ensures that the system remains robust and responsive to changes, making it an ideal candidate for dynamic optimization problems [35].
The continued evolution of metaheuristic approaches for the DVRP is critical as logistics systems become more complex and data-driven. The integration of different metaheuristics, the incorporation of learning mechanisms, and the application of predictive analytics are all indicative of a broader trend towards more intelligent and adaptive optimization frameworks. As the landscape of logistics continues to evolve, these advanced metaheuristic techniques will play an increasingly important role in ensuring efficient and responsive transportation networks.
Table 2. Some recent research related to the dynamic VRP.
Researchers | Year | Title
Mavrovouniotis et al. [28] | 2012 | Ant colony optimization with immigrants schemes for the dynamic vehicle-routing problem
Mavrovouniotis et al. [29] | 2012 | Ant colony optimization with memory-based immigrants for the dynamic vehicle-routing problem
Xiang et al. [35] | 2021 | A pairwise proximity learning-based ant colony algorithm for dynamic vehicle-routing problems
Housroum et al. [30] | 2006 | A hybrid GA approach for solving the dynamic vehicle-routing problem with time windows
Li et al. [33] | 2023 | Elastic strategy-based adaptive genetic algorithm for solving dynamic vehicle-routing problem with time windows
Garrido et al. [31] | 2010 | DVRP: a hard dynamic combinatorial optimisation problem tackled by an evolutionary hyper-heuristic
Khouadjia et al. [32] | 2010 | Multi-swarm optimization for dynamic combinatorial problems: A case study on dynamic vehicle-routing problem
Achamrah et al. [34] | 2021 | Solving inventory routing with transshipment and substitution under dynamic and stochastic demands using genetic algorithm and deep reinforcement learning

3. Open Competency Optimization

The learning environment cannot be favorable for the creation of new ideas and the resolution of complex problems unless the competencies of the different stakeholders are respected [36]. This is why, in a competency-based education system, an individual's learning evolves according to the capacities of that learner in a dynamic environment instead of passing through a predetermined program dictated by a static schedule and penalized by time requirements. According to Roegiers [36], the competency-based approach is centered on the learner. Indeed, a student becomes an active learner who must propose ideas (self-learning), organize their work through interactions with neighbors (neighbor learners), and seek new information guided by leadership (leadership interaction). Thus, the impact of the competency-based approach is very important and exceptional in the field of training and education.
A competency-based education system favors and pushes the learner to develop their knowledge while respecting their own abilities [2]. Indeed, the student becomes a pivot—in other words, the center of interest—who must plan their educational path by suggesting ideas (self-learning), interacting positively with neighbors (neighbor learners), and taking advantage of and exploiting new information proposed by leadership (leadership interaction). This is why this algorithm was inspired by this educational approach, which allows learners to seek solutions at their rhythm, be helped by their neighbors, and benefit from the ideas of the best.
The Open Competency Optimization (OCO) algorithm was designed to maintain a balance between exploration and exploitation. Indeed, learning via competency allows the learner to build their own learning path according to their abilities (updating the candidate solutions through self-learning) and interact with other learners (updating candidate solutions through neighbor learners and leadership interactions). Another advantage of this type of inspiration is having a population of ideas to solve a problem and not a population of individuals, as is the case with other algorithms inspired by the behavior of insects or animals. In the proposed algorithm, the population size is not static but, rather, dynamic, as learners react to the ideas of different learners. Each learner can have one or several ideas based on their capabilities and the reactions of others to their proposals. The purpose is as follows:
  • Each learner can build their learning path according to their capacities (self-learning);
  • Each learner can react to their closest group, defined either by position or by capability (neighbor learners). It should be noted that this group cannot exceed five members [37];
  • Learners can respond by discussing or adopting some smart proposals (better capabilities) from other learners (leadership interactions).
However, an algorithm can easily become trapped in local optima when solving a complex problem that contains several locally optimal solutions. To avoid premature convergence and to balance the exploration and exploitation capabilities, the proposed algorithm introduces several remedies. On one hand, the average component (the center of gravity) makes it possible to extend the search area of the Open Competency Optimization (OCO) algorithm. On the other hand, learners can mutate certain ideas by changing their conception (changing one component of a vector while keeping the others unchanged in the self-learning updates). A flowchart (Figure 3) and Algorithm 1 summarize the proposed OCO algorithm.
Algorithm 1 General steps of the OCO algorithm
1: Initialize N learners (population of solutions); t = 0;
2: while t < MaxIteration do
3:    Evaluate each learner with the fitness function;
4:    if capacity > ThresholdCapacity then
5:        Update learners through self-learning;
6:    end if
7:    Update learners through neighbor learner groups;
8:    Update learners through leadership interactions;
9:    t = t + 1;
10: end while
11: Return the best solution(s);
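Read as code, Algorithm 1 maps onto the skeleton below. This is a structural sketch under our own naming, not the authors' implementation: the three phase functions are passed in as parameters and are detailed in Sections 3.1-3.3, and fitness is assumed to be minimized.

```python
import random

def oco(fitness, init_learner, phases, n_learners, max_iter, threshold_capacity=0.8):
    """Structural sketch of Algorithm 1. `phases` holds the three update
    functions (self-learning, neighbor-group learning, leadership
    interaction); each maps (population, fitness, t) to a new population."""
    self_learn, neighbor_groups, leadership = phases
    learners = [init_learner() for _ in range(n_learners)]
    for t in range(max_iter):
        capacity = random.random()            # learner capacity, drawn as in Algorithm 2
        if capacity > threshold_capacity:     # self-learning is restricted (line 4)
            learners = self_learn(learners, fitness, t)
        learners = neighbor_groups(learners, fitness, t)
        learners = leadership(learners, fitness, t)
    return min(learners, key=fitness)         # line 11: return the best solution
```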

3.1. Self-Learning

Each learner can build their own learning by assimilating the concepts of the prerequisites and reformulating them when challenged by the new facts imposed by a problem situation. The strength of the competency-based approach is its ability to push learners to build new knowledge: questioning old knowledge, modifying and adapting it to create new knowledge, and comparing the new knowledge with the old. For a learner $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, this can be formulated as follows:
$$x_{ir}^{new} = \begin{cases} x_{ir}^{old} + rand_r \cdot (x_{ir}^{old} - x_{ik}), & \text{or} \\ x_{ir}^{old} + rand_r \cdot (x_{il} - x_{ik}), & \text{or} \\ x_{ir}^{old} + rand_r \cdot (x_{il} + x_{ik}), & \text{or} \\ x_{ik}, \end{cases} \text{ if } f(X_i^{new}) < f(X_i^{old}); \quad x_{ir}^{new} = x_{ir}^{old} \text{ otherwise} \quad (7)$$
where $r$, $k$, and $l$ are indexes selected randomly from 1 to $D$, $D$ is the dimension of the learning vector (the proposed solution of the problem), and $rand_r$ is a random number between 0 and 1. However, some learners cannot plan their learning path alone; this constraint depends on the abilities of each learner, which prompts us to choose a capability threshold (ThresholdCapacity = 0.8). In this study, on one hand, we restrict self-learning with the condition capacity > ThresholdCapacity. On the other hand, the population size is dynamic, because the algorithm counts the ideas of the learners rather than the number of learners. Each learner can therefore propose one or more ideas, with the restriction that the population cannot exceed 5/4 of its initial size. Equation (7) offers four strategies for adapting solutions and better exploring the search space; depending on the adaptation of an idea (IdeaAdaptation), the algorithm can insert one or two solutions into the population of learners. Algorithm 2 summarizes these conditions.
Algorithm 2 Self-learning conditions
1: capacity = rand(0, 1)
2: IdeaAdaptation ← randomly chosen in [0, 1]
3: if capacity > ThresholdCapacity then
4:    Update through self-learning
5:    if NewPopulationSize < (5/4) × PopulationSize then
6:        if IdeaAdaptation < 0.05 then
7:            Insert 1 solution;
8:            NewPopulationSize ← NewPopulationSize + 1
9:        else
10:           if ThresholdCapacity = 0.1 then
11:               Insert 2 solutions;
12:               NewPopulationSize ← NewPopulationSize + 2
13:           end if
14:       end if
15:   end if
16: end if
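A possible reading of Equation (7) and Algorithm 2 is sketched below. Since parts of Equation (7) are ambiguous in the source, the sign choices in the four strategies follow our reconstruction and should be treated as assumptions; the function names are ours.

```python
import random

def self_learning_step(x, fitness, population, max_size):
    """One self-learning update (our reading of Eq. (7)) plus the
    idea-insertion rule of Algorithm 2. Requires dimension D >= 3."""
    D = len(x)
    r, k, l = random.sample(range(D), 3)      # random component indexes
    rand_r = random.random()
    candidate = list(x)
    strategy = random.randrange(4)            # Eq. (7) offers four strategies
    if strategy == 0:
        candidate[r] = x[r] + rand_r * (x[r] - x[k])
    elif strategy == 1:
        candidate[r] = x[r] + rand_r * (x[l] - x[k])
    elif strategy == 2:
        candidate[r] = x[r] + rand_r * (x[l] + x[k])
    else:
        candidate[r] = x[k]
    # Idea insertion (Algorithm 2): the population may grow up to 5/4 of its
    # initial size, one idea at a time when the adaptation test passes.
    if len(population) < max_size and random.random() < 0.05:
        population.append(candidate)
    # Keep the new idea only if it improves the learner (greedy acceptance).
    return candidate if fitness(candidate) < fitness(x) else list(x)
```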

3.2. Neighbor Learner Groups

A learner, in their quest for learning, is influenced by the group around them. This influence depends not only on their position but also on the group of learners who have similar capabilities, and this group cannot exceed five learners. In addition, the learners in a round table (RT) around a group are periodically modified after a certain number of generations (the period of attraction within a group) so that the exchange of information covers all learners, achieving better exploration capabilities and seeking other promising areas of the search space. This round table respects the following formulas:
$$RT = |\alpha \cdot X_g - X| \quad (8)$$
$$X^{new} = X_g - \beta \cdot RT \quad (9)$$
Here, $g$ represents the number of learners in the group; this number is randomly selected from the set $\{2, \ldots, 5\}$. $\alpha$ and $\beta$ are two learning attraction coefficients that are calculated as follows:
$$\beta = r_1 \times 2.5\, \exp\!\left(-\frac{t^2}{2\,(LIter/3)^2}\right) \times 2\pi \quad (10)$$
$$\alpha = r_2 - \frac{t}{LIter} \quad (11)$$
where $r_1$ and $r_2$ are randomly selected in the range [0, 1]. The coefficient $\alpha$ decreases linearly over the generations, while $\beta$ follows a Gaussian profile within the learning attraction group (see Figure 4); $LIter$ is the maximum number of iterations.
The previously selected variables allow the learner to react in their learning group circle.
A learner can develop their competency by reacting not according to their neighborhood but in attraction to other learners who have the same abilities; in this way, the algorithm exploits the neighborhood to the fullest. Algorithm 3 performs a learning interaction at the level of the nearest neighbor group of $X_i$, namely $(X_{i+1}, X_{i+2}, \ldots, X_{i+g})$. Its pseudocode was then changed at the instruction level to operate on a group of at most five randomly chosen learners who have the same capabilities; the pseudocode in Algorithm 4 summarizes these changes.
Algorithm 3 Strategy for the learner group in a close neighborhood
1: g = rand(2, 5) /* random number in {2, …, 5} */
2: R ← 0
3: for j ← 1 to g do
4:    RT_j = |α_i · X_{i+j} − X_i|
5:    R = R + X_{i+j} − β_j · RT_j
6: end for
7: R = R / g
8: if f(R) < f(X_i) then
9:    X_i = R
10: else
11:   X_i = X_i
12: end if
Another strategy was also adopted in this study (Algorithm 4).
Algorithm 4 Strategy for a randomly chosen learner group
1: g = rand(1, 5) /* random number in {1, 2, …, 5} */
2: R ← 0
3: for j ← 1 to g do
4:    K = rand(1, LN) /* random number in {1, 2, …, LN}; LN is the number of learners */
5:    RT_j = |α_i · X_K − X_i|
6:    R = R + X_K − β_j · RT_j
7: end for
8: R = R / g
9: if f(R) < f(X_i) then
10:   X_i = R
11: else
12:   X_i = X_i
13: end if
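A sketch of the randomly chosen group strategy (Algorithm 4) follows. The forms of α and β mirror our reconstruction of Equations (10) and (11), so they are assumptions, and the function name is ours.

```python
import math
import random

def neighbor_group_step(i, learners, fitness, t, max_iter):
    """Sketch of Algorithm 4: blend a learner with a randomly chosen group."""
    x_i = learners[i]
    g = random.randint(1, 5)                       # group size in {1, ..., 5}
    R = [0.0] * len(x_i)
    for _ in range(g):
        k = random.randrange(len(learners))        # random learner X_K
        x_k = learners[k]
        alpha = random.random() - t / max_iter     # Eq. (11): linearly decreasing (assumed form)
        beta = (random.random() * 2.5
                * math.exp(-t**2 / (2 * (max_iter / 3) ** 2))
                * 2 * math.pi)                     # Eq. (10): Gaussian profile (assumed form)
        rt = [abs(alpha * a - b) for a, b in zip(x_k, x_i)]        # round table, Eq. (8)
        R = [acc + a - beta * d for acc, a, d in zip(R, x_k, rt)]  # Eq. (9), accumulated
    R = [acc / g for acc in R]                     # average over the group
    return R if fitness(R) < fitness(x_i) else x_i # greedy acceptance
```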

3.3. Leadership Interaction

In the context of competency-based learning, a learner is influenced by the ideas of the best learner and reacts positively to leadership. The latter is not necessarily the best, but is a learner who is closer to all learners (the competency is equal to the average of the whole group). This two-way interaction improves and builds the capabilities of each learner. The impact of these interactions with the best and the average of the population is the exploration of promising search spaces and their exploitation. A detailed description of this dynamic group strategy is given in the following:
  • Every learner adopts or reacts to the idea of the best. Either the best solution is kept or other spaces for the best solution are exploited.
  • The competency-based approach is not captured by the concept of the middle class. In traditional education, learners are guided by average learners; the best and the weakest are beyond the reach of such teaching, which promotes the average and neglects the others. To avoid this situation, each learner interacts with the average according to their capabilities, because learners develop their own competencies. This reinforces the exploratory nature of the algorithm and thus helps improve the diversity of ideas (solutions to the problem).
The formula of this interaction in the concept of competency learning can be formulated for a learner X i using Formula (12):
$$x_i^{new} = C_i \cdot x_i^{old} + \epsilon_i \cdot (X_{best} - \lambda \cdot X_{mean}) \quad (12)$$
where $\lambda$ is a learning coefficient that belongs to the set $\{1, 2\}$, $X_{mean}$ is the average value of all learners, $X_{best}$ is the value of the best learner, $C_i$ is the capability of learner $X_i$ given by Formula (13), and $r_1$ is a random number in $[0, 1]$.
$$C_i = \frac{1}{r_1 + \exp\!\left(\frac{f_i}{\max(f_i)\, Iter}\right)} \quad (13)$$
$\epsilon_i$ is a modulation coefficient belonging to $[0, 1]$, as shown in Formula (14).
$$\epsilon_i = \frac{1}{r_1 + \exp\!\left(\frac{f_i}{\max(f_i) \times Iter}\right)} \quad (14)$$
$f_i$ is the competency of learner $X_i$ (the fitness function), and $\max(f_i)$ is the best capability (best value of the fitness function) over the learning cycle. The parameters $\epsilon_i$ and $C_i$ make it possible to control the diversity and to improve the speed of convergence.
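The leadership update of Equations (12)-(14) can be sketched as follows. The placement of $Iter$ inside $C_i$ and $\epsilon_i$ follows our reconstruction of the garbled formulas, so the two coefficients coincide here; $f_{max}$ and $iter_t$ are assumed to be nonzero, and the names are ours.

```python
import math
import random

def leadership_step(x_i, f_i, f_max, x_best, x_mean, iter_t):
    """Sketch of the leadership interaction, Eqs. (12)-(14)."""
    r1 = random.random()
    lam = random.choice([1, 2])                              # learning coefficient lambda
    c_i = 1.0 / (r1 + math.exp(f_i / (f_max * iter_t)))      # Eq. (13), reconstructed form
    eps_i = 1.0 / (r1 + math.exp(f_i / (f_max * iter_t)))    # Eq. (14), reconstructed form
    # Eq. (12): pull the learner toward the best, modulated by the mean.
    return [c_i * xo + eps_i * (xb - lam * xm)
            for xo, xb, xm in zip(x_i, x_best, x_mean)]
```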

3.4. Optimization Analysis

During the evolution of the algorithm, the learners gravitate towards the optimal solution, which can be seen as the improvement of their skills. The exploration phase occurs when the learners step out of their traditional learning environment to discover other realms that allow them to develop their skills. Similarly, learners explore other realms by exchanging ideas with their work group. Once the groups are harmonized, they can exploit their skills to find the best solution. Learners can interact with the group leadership (centroid) and with the best performers to better exploit their paths to skill development.
As indicated in the algorithm, OCO starts by randomly generating ideas (solutions) within the boundary of the open problem. The equations aim to move the initial solutions of OCO within the search space in search of the optimal solution. Competence is evaluated using the fitness function, and the most fit learner is the one whose idea returns the optimal value of the fitness function. The optimization processes are repeated several times until a stopping criterion is met. This criterion can be a maximum number of iterations or another, depending on the nature of the problem.

3.4.1. Exploration

At the beginning of the evolution process, the area containing the best solution has a higher chance of being reached due to OCO’s global exploration capabilities. OCO adopts two strategies to achieve this. The first is the self-learning strategy, which optimally explores the search space using the formulas in Equation (7). The second strategy is the neighbor learner group strategy. At the start of evolution, the OCO algorithm explores the search space outside the round table of ideas. As iterations progress, the system exploits already-found solutions. This exploration capability of OCO helps avoid becoming trapped in a local optimum and allows the algorithm to expand into other search spaces.

3.4.2. Exploitation

The algorithm exploits the search space in the neighbor learner group strategy phase. When there are no new ideas to explore, it refines existing ones (exploitation of the search space as defined by Formulas (8) and (9)). Meanwhile, the local exploitation capability aids in finding a better solution in the vicinity of the best solutions through the leadership interaction strategy. This exploitation capability has a crucial influence on generating optimal solutions in successive idea generations. To achieve effective and precise search results, the balance between exploration and exploitation is crucial, and this balance is well established in OCO.

3.4.3. Computational Complexity

The algorithmic complexity of the proposed algorithm is expressed in Big-O notation, which describes how the computational time and memory usage grow with the size of the input data and the processing involved. This complexity is assessed in terms of the temporal and spatial resources required to execute the algorithm.
Let $N$ denote the size of the candidate solution population, $d$ the dimension of the fitness function, and $Max$ the maximum number of iterations.
$$O(OCO) = O\big((N \times Fitness\_Evaluation \times Max) + (N \times d \times Max)\big)$$

4. Experimental Studies

The performance of OCO was tested using six problems generated by the benchmark proposed by Li et al. [9]. There were seven types of changes in the system control parameters in the benchmark test. They were small step changes, large step changes, random changes, chaotic changes, recurrent changes, recurrent changes with noise, and dimensional changes. The framework of the seven change types is described as follows:
  • Small step:
$$\Delta\phi = \alpha \cdot \|\phi\| \cdot r \cdot \phi_{severity}$$
  • Large step:
$$\Delta\phi = \|\phi\| \cdot \big(\alpha \cdot \mathrm{sign}(r) + (\alpha_{max} - \alpha) \cdot r\big) \cdot \phi_{severity}$$
  • Random:
$$\Delta\phi = N(0,1) \cdot \phi_{severity}$$
  • Chaotic:
$$\phi(t+1) = A \cdot (\phi(t) - \phi_{min}) \cdot \left(1 - \frac{\phi(t) - \phi_{min}}{\|\phi\|}\right)$$
  • Recurrent:
$$\phi(t+1) = \phi_{min} + \|\phi\| \cdot \frac{\sin\!\left(\frac{2\pi}{P}\, t + \theta\right) + 1}{2}$$
  • Recurrent with noise:
$$\phi(t+1) = \phi_{min} + \|\phi\| \cdot \frac{\sin\!\left(\frac{2\pi}{P}\, t + \theta\right) + 1}{2} + N(0,1) \cdot noisy_{severity}$$
  • Dimensional:
$$D(t+1) = D(t) + sign \times \Delta D$$
where $\|\phi\|$ is the change range of $\phi$, $\phi_{severity}$ is a constant number that indicates the change severity of $\phi$, and $\phi_{min}$ is the minimum value of $\phi$. $\alpha \in (0,1)$ and $\alpha_{max} \in (0,1)$ are constant values, which are set to 0.04 and 0.1 in the GDBG system. $\theta$ is the initial phase, $P$ is the period of the recurrent changes, $noisy_{severity} \in (0,1)$ is the noise severity, and $r$ is a random number in $(-1,1)$. $\mathrm{sign}(x)$ returns 1 when $x$ is greater than 0, −1 when $x$ is less than 0, and 0 otherwise. $\Delta D$ is equal to 1, and the sign of the dimensional change is reversed at the bounds:
$$D(t) = Max(D) \Rightarrow sign = -1, \qquad D(t) = Min(D) \Rightarrow sign = 1$$
where $Max(D)$ and $Min(D)$ are the maximum and minimum numbers of dimensions. The six test problems defined in [9] are the following: F1, the rotation peak function; F2, the composition of the sphere function; F3, the composition of Rastrigin's function; F4, the composition of Griewank's function; F5, the composition of Ackley's function; and F6, the hybrid composition function. The parameters of the six problems are set in the same way as in [9].
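To illustrate how such a benchmark perturbs a control parameter, here is a minimal sketch of the small-step and large-step changes under the formulas above. The clamping of φ to its range is our addition, the function names are ours, and the default constants follow the GDBG settings quoted in the text.

```python
import random

def small_step(phi, phi_min, phi_max, severity=1.0, alpha=0.04):
    """GDBG-style small-step change: delta = alpha * ||phi|| * r * severity,
    with r uniform in (-1, 1). Clamping to the range is our assumption."""
    span = phi_max - phi_min                   # ||phi||, the change range
    delta = alpha * span * random.uniform(-1, 1) * severity
    return min(phi_max, max(phi_min, phi + delta))

def large_step(phi, phi_min, phi_max, severity=1.0, alpha=0.04, alpha_max=0.1):
    """GDBG-style large-step change, following the formula quoted above."""
    span = phi_max - phi_min
    r = random.uniform(-1, 1)
    sign = (r > 0) - (r < 0)                   # sign(r) in {-1, 0, 1}
    delta = span * (alpha * sign + (alpha_max - alpha) * r) * severity
    return min(phi_max, max(phi_min, phi + delta))
```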

4.1. OCO’s Mechanism of Diversity

Our first experiment investigated OCO's mechanism of diversity. The second experiment was devoted to the performance of OCO in comparison with that of SGA and GADNS. In the experiments, the standard genetic algorithm (SGA) parameters were as follows: simulated binary crossover [38] with a crossover rate of 0.8 and polynomial mutation [39] with a mutation rate of 0.2. We used tournament selection with a size of 4, and the population size was N = 100. In this study, the concept of new ideas emerging in the learning process (the insertion of immigrants) was used; this concept is implicitly present in the proposed algorithm. For this, the ThresholdCapacity parameter of Algorithm 2 was changed to ThresholdCapacity = 0.1. In order to understand the effect of the remedies used by OCO on population diversity, we recorded the diversity of the population at every generation for each run of OCO on dynamic optimization problems (DOPs). The mean population diversity of OCO on DOPs at each generation over 30 runs was calculated according to the following formula:
$$Div(t) = \frac{1}{30} \sum_{k=1}^{30} \left[ \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \ne i}^{n} d_{i,j}(k,t) \right]$$
where $d_{i,j}(k,t)$ is the Euclidean distance between the $i$th and $j$th individuals at generation $t$ of the $k$th run, and $n$ is the population size. The diversity dynamics over the generations for SGA, OCO, the random-immigrant genetic algorithm (RIGA) [40] with a total number of immigrants of Ni = 30, and the hypermutation genetic algorithm (HMGA) [41] on DOPs are shown in Figure 5. It can be seen that OCO maintained the highest diversity level in the population, while SGA maintained the lowest. This interesting result shows that OCO, by controlling its parameters and using new operators, maintains a high diversity level in dynamic environments.
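As a concrete reading of this measure, the following minimal sketch computes the inner diversity term for one population; the $1/(n(n-1))$ normalization follows our reconstruction of the garbled formula above, and the function name is ours.

```python
import math

def population_diversity(pop):
    """Mean pairwise Euclidean distance of one population: the inner term of
    the Div(t) formula above (our reconstruction of the normalization)."""
    n = len(pop)
    total = sum(math.dist(pop[i], pop[j])      # Euclidean distance d_ij
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))
```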

4.2. Comparison with Other Metaheuristics

In the second set of experiments, the GDBG system was used, and the parameters of the test problems were set in the same way as in [9]. OCO was compared with SGA, the random-immigrant genetic algorithm (RIGA) [40] with a total number of immigrants of Ni = 30, and the hypermutation genetic algorithm (HMGA) [41]. It was also interesting to compare OCO with two recent methods published in the CEC 2009 competition: the clustering particle swarm optimizer (CPSO) [42] and the differential ant-stigmergy algorithm (DASA) [43]. For the third experiment, the parameters of the SGA, HMGA, and RIGA were as follows: simulated binary crossover [38] with a crossover rate of 0.85 and Gaussian mutation with a mutation rate of 0.15. We used tournament selection with a size of 4, and the population size was N = 100.
Table 3 through Table 8 present the best average values, mean average values, worst average values, and standard error (STD). These results allowed us to compare the quality of the solutions found by each technique. Table 3 shows that the performance of the four methods seemed almost similar, with a marked improvement for OCO, as F1 was the simplest function to optimize. However, when the number of peaks was equal to 50, OCO outperformed the other techniques. Table 4 presents the results for a more complicated function, which had multiple optima and a dimension of 10. The SGA technique lost diversity due to the convergence of the algorithm; RIGA lost solution quality due to the random individuals included in the population, and its diversity was much lower than that of OCO. Table 5 shows that, for the function F3 with multiple optima, the solution quality of OCO was much better; it is also worth noting that the diversity of the population in the case of OCO was much higher than that of the other techniques. Table 6 shows the performance of each method for a complicated multi-modal function with a huge number of local optima. OCO used its dynamic population size and assigned each new individual to the three steps of the proposed metaheuristic, so OCO did not fall into the trap of local optima and was able to find the global optimum. The other techniques (SGA, HMGA, RIGA) had problems finding the global optimum and became trapped in local optima, so their error values were high. The results for function F5 are presented in Table 7; the performance of OCO was much better than that of SGA, HMGA, and RIGA. Table 8 presents the results for the composition of complicated functions, especially Weierstrass's function, which is continuous at all points in the search space but not differentiable at any point. For this complicated function, OCO presented the best results compared with the other techniques.
By observing the results in Table 9, it can be seen that the challenges of the different change types were quite different. Going from the simplest problem, the small step change, to the most complicated one, SGA, RIGA, and HMGA have difficulties in optimizing the problems, especially under chaotic changes and dimensional changes. However, in all tests, OCO performed much better than SGA, RIGA, and HMGA. It is interesting to note that the overall performance of OCO was equal to 68.75, while that of CPSO was only equal to 57.57.
Therefore, OCO outperformed CPSO because OCO used an equilibrium between exploitation and exploration. The performance of DASA, which was based on ant colonies, was much lower than that of OCO.

4.3. Dynamic Vehicle-Routing Problem

In this dynamic version of vehicle routing, customer requests are not known in advance but change dynamically during the distribution process. Consequently, the routes must be re-planned in order to take these new demands into account. Another variant concerns the cost (distance traveled to deliver a product) of distribution, which can change depending on the delivery period (during peak hours, other routes may be considered to avoid traffic jams). To measure the performance of the Open Competency Optimization algorithm, the offline performance measure was used [44]. This measure is defined as the average of the best solutions found by the OCO algorithm after detecting dynamic changes following each iteration:
$$P_{off} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{R} \sum_{j=1}^{R} P_{ij}$$
Another factor added to this experimental study was the diversity factor $F_{DIV}$, defined as follows:
$$F_{DIV} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{R} \sum_{j=1}^{R} DIV_{ij}$$
$DIV_{ij}$ is defined as follows:
$$DIV_{ij} = \frac{1}{N(N-1)} \sum_{p=1}^{N} \sum_{q \ne p}^{N} \left(1 - \frac{NC_{pq}}{n + avg(NV_p, NV_q)}\right)$$
$NV_p$ and $NV_q$ represent the numbers of vehicles used by solutions $p$ and $q$, respectively; $N$ is the population size; and $NC_{pq}$ represents the number of common routes between solutions $p$ and $q$. The test was performed on a real-life problem proposed by Fisher, which is known as benchmark dataset F [45]. The OCO algorithm was tested on the dynamic vehicle-routing instances F-n45-k4, F-n72-k4, and F-n135-k7 [45]. These three instances represent deliveries of groceries, tires, and accessories for service stations. The optimal values of these three instances were 724, 237, and 1162, respectively [46]. For each instance mentioned above, n represents the number of customers (including the depot), while k is the number of vehicles.
In this study, the solution was represented as a sequence of N customers to be served, which determined the relative allocation of K vehicles. We checked the capacity constraint of each vehicle starting from the first element of the vector that represented the solution; if it did not violate the constraint, we moved to the next element. If it violated the constraint in a vector element of the solution, we considered using another vehicle starting from that element and repeated the process until all customers were served. To simplify the problem, the capacity of the vehicles was the same, and the maximum distance that each vehicle could travel was equal.
For example, if there were nine customers and a randomly generated solution S 1 was
1 2 3 4 6 5 9 8 7,
this could be interpreted as r=3 possible routes:
0–1–2–3–0, 0–4–6–5–0, and 0–9–8–7–0.
If the number of routes did not exceed the number of available vehicles (here, 3), the solution was legal; otherwise, it was illegal.
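The decoding procedure just described can be sketched as follows; the function and variable names are ours, and the demands and capacity are illustrative.

```python
def decode_routes(perm, demand, Q):
    """Decode a permutation of customers into routes, as described above:
    a new vehicle is opened whenever adding the next customer would exceed
    the capacity Q."""
    routes, current, load = [], [], 0
    for c in perm:
        if load + demand[c] > Q:     # capacity would be violated: close the route
            routes.append(current)
            current, load = [], 0
        current.append(c)
        load += demand[c]
    if current:
        routes.append(current)
    return routes                    # each route implicitly starts and ends at depot 0

# With perm = [1, 2, 3, 4, 6, 5, 9, 8, 7] and suitable demands and capacity,
# this yields the three tours 0-1-2-3-0, 0-4-6-5-0, 0-9-8-7-0 shown above.
```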
The application of our OCO metaheuristic to this representation follows the three steps of the algorithm. The first, self-learning, can be considered a mutation of the solution vector. In this study, we chose four movement techniques for the solution: inversion, exchange, displacement, and the famous 2-opt operator, for example:
Inversion: 1 2 3 4 6 5 9 8 7 → 1 3 2 4 6 5 9 8 7
Exchange: 1 2 3 4 6 5 9 8 7 → 1 2 8 4 6 5 9 3 7
Displacement: 1 2 3 4 6 5 9 8 7 → 1 7 2 3 4 6 5 9 8
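A minimal sketch of these four moves on a permutation follows; the function names are ours. On a path representation, a 2-opt move amounts to reversing a segment, so it is expressed here through the same inversion primitive.

```python
import random

def inversion(s):
    """Reverse a random segment, e.g., 1 2 3 ... -> 1 3 2 ..."""
    i, j = sorted(random.sample(range(len(s)), 2))
    return s[:i] + s[i:j + 1][::-1] + s[j + 1:]

def exchange(s):
    """Swap the customers at two random positions."""
    i, j = random.sample(range(len(s)), 2)
    s = list(s)
    s[i], s[j] = s[j], s[i]
    return s

def displacement(s):
    """Remove one customer and reinsert it at another position."""
    i, j = random.sample(range(len(s)), 2)
    s = list(s)
    s.insert(j, s.pop(i))
    return s

def two_opt(s):
    """Classic 2-opt move: on a path representation, a segment reversal."""
    return inversion(s)
```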
By applying Formula (7), we chose the best solution. In the second step (neighbor learning groups), for a solution $S_1$ of the population of learners, we chose a group of $g$ neighboring learners (solutions). We applied Algorithm 3 or Algorithm 4, which involves multiplying solutions, adding them, and dividing by an integer. If the solution found in this step was illegal, we applied a repair technique: the repeated elements of the solution were eliminated and replaced with the elements missing from the solution, placed in the order of their indices in the best solution.
For example, $S_1$ was
1 2 2 3 4 5 6 3 7,
and the best solution of the group $g$ in Algorithm 3 or Algorithm 4 was $S_b$:
7 9 8 1 2 3 4 5 6.
The elements that did not exist in $S_1$ were 8 and 9; we inserted them in the order of their appearance in the best solution $S_b$, i.e., 9 8.
Hence, the solution $S_1$ became
1 2 9 3 4 5 6 8 7.
We applied the same technique for the leadership interaction step.
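The repair rule can be sketched as follows; the names are ours, and the example from the text is reproduced as a check.

```python
def repair(solution, best):
    """Repair an illegal permutation as described above: drop repeated
    customers and fill the gaps with the missing ones, ordered as they
    appear in the best solution `best`."""
    seen, marked = set(), []
    for c in solution:
        marked.append(c if c not in seen else None)   # mark duplicates
        seen.add(c)
    missing = iter(c for c in best if c not in seen)  # order of the best solution
    return [c if c is not None else next(missing) for c in marked]

# Example from the text:
# repair([1, 2, 2, 3, 4, 5, 6, 3, 7], [7, 9, 8, 1, 2, 3, 4, 5, 6])
# -> [1, 2, 9, 3, 4, 5, 6, 8, 7]
```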
Using the dynamic permutation problem generator [47], we chose a random change and a cyclic change. The frequency of dynamic changes took two values, f = 5 and f = 100, indicating a strong and a weak change, respectively. The degree of change m took the values of 0.1, 0.25, 0.5, and 0.75, indicating changes under low, medium, dense, and very dense traffic.
Table 10 through Table 22 give the results of this comparison, indicating the offline performance of OCO with the insertion of new ideas (solutions); of the genetic algorithm with hypermutation (HMGA) [41], which used partially mapped crossover, the most commonly used crossover operator in the VRP; and of ant colony optimization with the insertion of immigrants [48].
The Open Competency Optimization algorithm with the insertion of new ideas (ThresholdCapacity = 0.1) presented results that were clearly superior to those of the ACO algorithm with the insertion of random immigrants for the environment changes (from random to cyclical) and also HMGA. This showed that our algorithm maintained a balance between exploitation and exploration even in environments with a strong variation ( f = 100 and m = 0.75 ). This was due to the three learning phases of the algorithm. These three phases made it possible to diversify the population of solutions while making the best use of the research space.

5. Conclusions

We built a metaheuristic based on human social and educational behavior. The proposed algorithm was inspired by the competency-based approach, and the open problem technique was used to shape learners' learning. The three learning phases of self-learning, group learning, and leadership interaction made it possible to solve multimodal and unimodal functions with different dimensions. The algorithm gave encouraging results for dynamic logistical problems, with improvements obtained by adding new ideas (random immigrants). Based on our analysis and experiments, we showed that the skills and competency strategies made it possible to use local information more effectively and generate higher-quality solutions. Comparing the results of our metaheuristic with those of other algorithms on the chosen test problems, we showed that the algorithm significantly improved the results.
The experimental results showed that our model achieved a balance between exploration and exploitation and performed well in comparison with algorithms that adapt their exploitation and exploration parameters separately. In addition, in terms of diversity, OCO had a dynamic population size that could vary according to the specificity of the environment studied, and new solutions (immigrant solutions) were integrated into the population.
Future work will include research on the adaptive control of topological structures to make the algorithm more efficient and the use of other immigration techniques; namely, elitism-based immigrants and memory-based immigrants. In addition, the algorithm can be applied to other optimization problems, such as constraints, noisy problems, and multi-objective optimization.
It is also worth mentioning that the continuous evolution of metaheuristic approaches to solving the dynamic vehicle-routing problem reflects the increasing complexity and dynamism of modern logistical networks. As research in this area continues to advance, we will consider the combination of deep learning and reinforcement learning techniques with the proposed metaheuristic in future work. Predicting changes in a dynamic environment will allow us to better track solutions with OCO. This proposed refinement and the continuous innovation of machine learning techniques will allow us to have more efficient and sustainable routing solutions in an ever-changing world.

Author Contributions

The contribution of each author can be summarized as follows: Conceptualization, K.J.; methodology, K.J.; software, R.B.J. and A.E.M.; validation, K.J., R.B.J. and A.E.M.; formal analysis, K.J., R.B.J. and A.E.M.; resources, R.B.J. and A.E.M.; writing—original draft preparation, K.J. and R.B.J.; writing—review and editing, R.B.J. and A.E.M.; supervision, K.J.; project administration, K.J.; funding acquisition, R.B.J. and A.E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Talbi, E. Metaheuristics: From Design to Implementation; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 2, pp. 268–308. [Google Scholar]
  2. Roegiers, X. L’école et L’évaluation: Des Situations Pour Évaluer les Compétences des Élèves; De Boeck: Brussels, Belgium, 2004. [Google Scholar]
  3. Suganthan, P.N. Benchmarks for Evaluation of Evolutionary Algorithms. 2016. Available online: https://www3.ntu.edu.sg/home/epnsugan/index_files/cec-benchmarking.htm (accessed on 10 July 2016).
  4. Wang, H.; Yang, S.; Ip, W.; Wang, D. Adaptive primal–dual genetic algorithms in dynamic environments. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2009, 39, 1348–1361. [Google Scholar] [CrossRef] [PubMed]
  5. Kordestani, J.K.; Ranginkaman, A.E.; Meybodi, M.R.; Novoa-Hernández, P. A novel framework for improving multi-population algorithms for dynamic optimization problems: A scheduling approach. Swarm Evol. Comput. 2019, 44, 788–805. [Google Scholar] [CrossRef]
  6. Yazdani, D.; Cheng, R.; Yazdani, D.; Branke, J.; Jin, Y.; Yao, X. A survey of evolutionary continuous dynamic optimization over two decades—Part B. IEEE Trans. Evol. Comput. 2021, 25, 630–650. [Google Scholar] [CrossRef]
  7. Jebari, K.; Bouroumi, A.; Ettouhami, A. Parameters control in GAs for dynamic optimization. Int. J. Comput. Intell. Syst. 2013, 6, 47–63. [Google Scholar] [CrossRef]
  8. Jebari, K.; Nasry, B.; Bouroumi, A.; Ettouhami, A. Evolutionary fuzzy rules for dynamic optimisation. Int. J. Innov. Comput. Appl. 2017, 8, 81–101. [Google Scholar] [CrossRef]
  9. Li, C.; Yang, S. A generalized approach to construct benchmark problems for dynamic optimization. In Proceedings of the Simulated Evolution and Learning: 7th International Conference, SEAL 2008, Melbourne, Australia, 7–10 December 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 391–400. [Google Scholar]
  10. Toth, P.; Vigo, D. The Vehicle Routing Problem; SIAM: Philadelphia, PA, USA, 2002. [Google Scholar]
  11. İlhan, İ. An improved simulated annealing algorithm with crossover operator for capacitated vehicle routing problem. Swarm Evol. Comput. 2021, 64, 100911. [Google Scholar] [CrossRef]
  12. Cordeau, J.F.; Laporte, G. A tabu search heuristic for the static multi-vehicle dial-a-ride problem. Transp. Res. Part B Methodol. 2003, 37, 579–594. [Google Scholar] [CrossRef]
  13. Wang, J.; Jagannathan, A.K.R.; Zuo, X.; Murray, C.C. Two-layer simulated annealing and tabu search heuristics for a vehicle routing problem with cross docks and split deliveries. Comput. Ind. Eng. 2017, 112, 84–98. [Google Scholar] [CrossRef]
  14. Abdulkader, M.M.; Gajpal, Y.; ElMekkawy, T.Y. Hybridized ant colony algorithm for the multi compartment vehicle routing problem. Appl. Soft Comput. 2015, 37, 196–203. [Google Scholar] [CrossRef]
  15. Wang, X.; Choi, T.M.; Liu, H.; Yue, X. Novel ant colony optimization methods for simplifying solution construction in vehicle routing problems. IEEE Trans. Intell. Transp. Syst. 2016, 17, 3132–3141. [Google Scholar] [CrossRef]
  16. Liu, R.; Jiang, Z.; Geng, N. A hybrid genetic algorithm for the multi-depot open vehicle routing problem. OR Spectr. 2014, 36, 401–421. [Google Scholar] [CrossRef]
  17. Mohammed, M.A.; Abd Ghani, M.K.; Hamed, R.I.; Mostafa, S.A.; Ahmad, M.S.; Ibrahim, D.A. Solving vehicle routing problem by using improved genetic algorithm for optimal solution. J. Comput. Sci. 2017, 21, 255–262. [Google Scholar] [CrossRef]
  18. Caceres-Cruz, J.; Arias, P.; Guimarans, D.; Riera, D.; Juan, A.A. Rich vehicle routing problem: Survey. ACM Comput. Surv. (CSUR) 2014, 47, 1–28. [Google Scholar] [CrossRef]
  19. Montoya-Torres, J.R.; Franco, J.L.; Isaza, S.N.; Jiménez, H.F.; Herazo-Padilla, N. A literature review on the vehicle routing problem with multiple depots. Comput. Ind. Eng. 2015, 79, 115–129. [Google Scholar] [CrossRef]
  20. Ritzinger, U.; Puchinger, J.; Hartl, R.F. A survey on dynamic and stochastic vehicle routing problems. Int. J. Prod. Res. 2016, 54, 215–231. [Google Scholar] [CrossRef]
  21. Elshaer, R.; Awad, H. A taxonomic review of metaheuristic algorithms for solving the vehicle routing problem and its variants. Comput. Ind. Eng. 2020, 140, 106242. [Google Scholar] [CrossRef]
  22. Escobar, J.W.; Linfati, R.; Toth, P.; Baldoquin, M.G. A hybrid granular tabu search algorithm for the multi-depot vehicle routing problem. J. Heuristics 2014, 20, 483–509. [Google Scholar] [CrossRef]
  23. Gendreau, M.; Ghiani, G.; Guerriero, E. Time-dependent routing problems: A review. Comput. Oper. Res. 2015, 64, 189–197. [Google Scholar] [CrossRef]
  24. Ghiani, G.; Guerriero, F.; Laporte, G.; Musmanno, R. Real-time vehicle routing: Solution concepts, algorithms and parallel computing strategies. Eur. J. Oper. Res. 2003, 151, 1–11. [Google Scholar] [CrossRef]
  25. Okulewicz, M.; Mańdziuk, J. A metaheuristic approach to solve dynamic vehicle routing problem in continuous search space. Swarm Evol. Comput. 2019, 48, 44–61. [Google Scholar] [CrossRef]
  26. Maryam Abdirad, K.K.; Gupta, D. A two-stage metaheuristic algorithm for the dynamic vehicle routing problem in Industry 4.0 approach. J. Manag. Anal. 2021, 8, 69–83. [Google Scholar]
  27. Mavrovouniotis, M.; Yang, S. Dynamic vehicle routing: A memetic ant colony optimization approach. In Automated Scheduling and Planning: From Theory to Practice; Springer: Berlin/Heidelberg, Germany, 2013; pp. 283–301. [Google Scholar]
  28. Mavrovouniotis, M.; Yang, S. Ant colony optimization with immigrants schemes for the dynamic vehicle routing problem. In Proceedings of the Applications of Evolutionary Computation: EvoApplications 2012: EvoCOMNET, EvoCOMPLEX, EvoFIN, EvoGAMES, EvoHOT, EvoIASP, EvoNUM, EvoPAR, EvoRISK, EvoSTIM, and EvoSTOC, Málaga, Spain, 11–13 April 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 519–528. [Google Scholar]
  29. Mavrovouniotis, M.; Yang, S. Ant colony optimization with memory-based immigrants for the dynamic vehicle routing problem. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1–8. [Google Scholar]
  30. Housroum, H.; Hsu, T.; Dupas, R.; Goncalves, G. A hybrid GA approach for solving the dynamic vehicle routing problem with time windows. In Proceedings of the 2006 2nd International Conference on Information & Communication Technologies, Damascus, Syria, 24–28 April 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 1, pp. 787–792. [Google Scholar]
  31. Garrido, P.; Riff, M.C. DVRP: A hard dynamic combinatorial optimisation problem tackled by an evolutionary hyper-heuristic. J. Heuristics 2010, 16, 795–834. [Google Scholar] [CrossRef]
  32. Khouadjia, M.R.; Alba, E.; Jourdan, L.; Talbi, E.G. Multi-swarm optimization for dynamic combinatorial problems: A case study on dynamic vehicle routing problem. In Proceedings of the International Conference on Swarm Intelligence, Brussels, Belgium, 8–10 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 227–238. [Google Scholar]
  33. Li, J.; Liu, R.; Wang, R. Elastic Strategy-Based Adaptive Genetic Algorithm for Solving Dynamic Vehicle Routing Problem with Time Windows. IEEE Trans. Intell. Transp. Syst. 2023, 24, 13930–13947. [Google Scholar] [CrossRef]
  34. Achamrah, F.E.; Riane, F.; Limbourg, S. Solving inventory routing with transshipment and substitution under dynamic and stochastic demands using genetic algorithm and deep reinforcement learning. Int. J. Prod. Res. 2022, 60, 6187–6204. [Google Scholar] [CrossRef]
  35. Xiang, X.; Tian, Y.; Zhang, X.; Xiao, J.; Jin, Y. A pairwise proximity learning-based ant colony algorithm for dynamic vehicle routing problems. IEEE Trans. Intell. Transp. Syst. 2021, 23, 5275–5286. [Google Scholar] [CrossRef]
  36. Roegiers, X. From Knowledge to Competency; Peter Lang Group AG: Brussels, Belgium, 2018. [Google Scholar]
  37. Lewin, K. Frontiers in group dynamics: Concept, method and reality in social science; social equilibria and social change. Hum. Relat. 1947, 1, 5–41. [Google Scholar] [CrossRef]
  38. Deb, K.; Agrawal, R.B. Simulated binary crossover for continuous search space. Complex Syst. 1995, 9, 115–148. [Google Scholar]
  39. Deb, K. Real-coded genetic algorithms with simulated binary crossover: Studies on multimodal and multiobjective problems. Complex Syst. 1995, 9, 431–454. [Google Scholar]
  40. Yang, S. Genetic algorithms with elitism-based immigrants for changing optimization problems. In Proceedings of the Workshops on Applications of Evolutionary Computation, Valencia, Spain, 11–13 April 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 627–636. [Google Scholar]
  41. Cobb, H.G. An Investigation into the Use of Hypermutation as an Adaptive Operator in Genetic Algorithms Having Continuous, Time-Dependent Nonstationary Environments; Naval Research Laboratory, Navy Center for Applied Research in Artificial: Washington, DC, USA, 1990. [Google Scholar]
  42. Yang, S.; Li, C. A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments. IEEE Trans. Evol. Comput. 2010, 14, 959–974. [Google Scholar] [CrossRef]
  43. Korosec, P.; Silc, J. The differential ant-stigmergy algorithm applied to dynamic optimization problems. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 407–414. [Google Scholar]
  44. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments-a survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317. [Google Scholar] [CrossRef]
  45. Boschetti, M.A.; Maniezzo, V.; Strappaveccia, F. Route relaxations on GPU for vehicle routing problems. Eur. J. Oper. Res. 2017, 258, 456–466. [Google Scholar] [CrossRef]
  46. Fisher, M.L. Optimal solution of vehicle routing problems using minimum k-trees. Oper. Res. 1994, 42, 626–642. [Google Scholar] [CrossRef]
  47. Mavrovouniotis, M.; Yang, S.; Yao, X. A benchmark generator for dynamic permutation-encoded problems. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Taormina, Italy, 1–5 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 508–517. [Google Scholar]
  48. Mavrovouniotis, M.; Yang, S. Ant algorithms with immigrants schemes for the dynamic vehicle routing problem. Inf. Sci. 2015, 294, 456–477. [Google Scholar] [CrossRef]
Figure 1. The VRP for serving 9 customers on 3 tours.
Figure 2. The VRP with dynamically planned routes and customers.
Figure 3. Flowchart of the proposed algorithm (OCO).
Figure 4. Variations in α and β over generations.
Figure 5. Comparison of the diversity of OCO with that of four other algorithms.
Table 1. Some recent research related to the static VRP.

Researchers | Year | Title
Cordeau et al. [12] | 2003 | A tabu search heuristic for the static multi-vehicle dial-a-ride problem
Escobar et al. [22] | 2014 | A hybrid granular tabu search algorithm for the multi-depot vehicle routing problem
İlhan [11] | 2021 | An improved simulated annealing algorithm with crossover operator for capacitated vehicle routing problem
Abdulkader et al. [14] | 2015 | Hybridized ant colony algorithm for the multi-compartment vehicle routing problem
Wang et al. [15] | 2016 | Novel ant colony optimization methods for simplifying solution construction in vehicle routing problems
Liu et al. [16] | 2014 | A hybrid genetic algorithm for the multi-depot open vehicle routing problem
Mazin et al. [17] | 2017 | Solving vehicle routing problem by using improved genetic algorithm for optimal solution
Table 3. Error values achieved for problem F1.

Technique Used | Peaks (m) | Errors | T1 | T2 | T3 | T4 | T5 | T6 | T7
SGA | 10 | Avg best | 0.03 × 10^-7 | 1.34 × 10^-7 | 4.13 × 10^-5 | 8.7 × 10^-6 | 0.65 × 10^-5 | 6.5 × 10^-6 | 1.24 × 10^-5
 | | Avg worst | 30.53 | 42.38 | 64.16 | 69.74 | 32.15 | 54.84 | 37.05
 | | Avg mean | 25.11 | 18.01 | 20.54 | 31.41 | 15.23 | 31.46 | 15.73
 | | STD | 10.45 | 14.19 | 12.23 | 23.39 | 7.69 | 20.81 | 8.05
 | 50 | Avg best | 8.74 × 10^-5 | 4.12 × 10^-4 | 6.33 × 10^-4 | 5.32 × 10^-4 | 4.11 × 10^-4 | 5.35 × 10^-5 | 2.21 × 10^-4
 | | Avg worst | 41.35 | 58.53 | 61.21 | 56.84 | 44.62 | 54.61 | 23.62
 | | Avg mean | 27.61 | 20.46 | 19.09 | 27.18 | 14.783 | 18.91 | 13.78
 | | STD | 9.28 | 10.38 | 13.41 | 21.08 | 4.38 | 18.38 | 4.46
HMGA | 10 | Avg best | 0 | 0 | 8.71 × 10^-7 | 0 | 5.66 × 10^-5 | 0 | 2.31 × 10^-6
 | | Avg worst | 1.24 | 21.63 | 40.62 | 5.78 | 19.39 | 4.98 | 21.03
 | | Avg mean | 5.04 | 12.89 | 26.35 | 8.16 | 11.79 | 8.76 | 9.18
 | | STD | 1.12 | 4.23 | 17.12 | 2.75 | 4.66 | 2.15 | 4.02
 | 50 | Avg best | 1.34 × 10^-6 | 1.52 × 10^-5 | 3.41 × 10^-6 | 6.23 × 10^-5 | 1.57 × 10^-5 | 5.43 × 10^-6 | 2.45 × 10^-5
 | | Avg worst | 3.12 | 5.13 | 9.31 | 9.17 | 10.24 | 5.76 | 4.83
 | | Avg mean | 12.92 | 15.27 | 17.38 | 4.07 | 7.43 | 8.97 | 15.49
 | | STD | 2.46 | 5.42 | 9.57 | 3.28 | 6.84 | 3.37 | 4.03
RIGA | 10 | Avg best | 0 | 1.52 × 10^-7 | 6.25 × 10^-7 | 8.51 × 10^-7 | 4.33 × 10^-7 | 9.12 × 10^-7 | 4.11 × 10^-7
 | | Avg worst | 1.35 | 3.75 | 8.51 | 5.16 | 5.91 | 10.18 | 17.38
 | | Avg mean | 2.02 | 12.81 | 17.95 | 8.26 | 9.29 | 61.46 | 5.89
 | | STD | 1.42 | 6.28 | 12.02 | 1.75 | 4.66 | 2.73 | 4.03
 | 50 | Avg best | 0 | 2.52 × 10^-6 | 1.03 × 10^-5 | 3.51 × 10^-6 | 4.71 × 10^-5 | 4.36 × 10^-6 | 0.52 × 10^-5
 | | Avg worst | 2.32 | 14.03 | 7.21 | 13.27 | 22.13 | 23.15 | 30.98
 | | Avg mean | 8.62 | 9.27 | 16.37 | 8.69 | 6.64 | 9.47 | 6.49
 | | STD | 1.98 | 4.07 | 10.41 | 2.84 | 2.84 | 3.94 | 2.71
CPSO | 10 | Avg best | 1.054 × 10^-7 | 5.214 × 10^-8 | 4.306 × 10^-8 | 9.721 × 10^-7 | 2.561 × 10^-7 | 4.325 × 10^-7 | 5.036 × 10^-9
 | | Avg worst | 1.244 | 27.12 | 28.15 | 3.239 | 21.72 | 26.55 | 35.52
 | | Avg mean | 0.03514 | 2.718 | 4.131 | 0.09444 | 1.869 | 1.056 | 4.54
 | | STD | 0.4262 | 6.523 | 8.994 | 0.7855 | 4.491 | 4.805 | 9.119
 | 50 | Avg best | 2.447 × 10^-6 | 2.061 × 10^-7 | 9.888 × 10^-7 | 4.353 × 10^-7 | 2.121 × 10^-6 | 9.033 × 10^-5 | 4.169 × 10^-6
 | | Avg worst | 4.922 | 22.08 | 25.65 | 1.974 | 9.606 | 22.08 | 27.9
 | | Avg mean | 0.2624 | 3.279 | 6.319 | 0.125 | 0.8481 | 1.482 | 6.646
 | | STD | 0.9362 | 5.303 | 7.442 | 0.3859 | 1.779 | 4.393 | 7.94
DASA | 10 | Avg best | 4.17 × 10^-13 | 3.80 × 10^-13 | 3.80 × 10^-13 | 6.57 × 10^-13 | 5.56 × 10^-13 | 7.90 × 10^-13 | 3.55 × 10^-14
 | | Avg worst | 5.51 | 3.85 × 10^1 | 3.97 × 10^1 | 9.17 × 10^1 | 2.09 × 10^1 | 4.71 × 10^1 | 2.91 × 10^1
 | | Avg mean | 1.80 × 10^-1 | 4.18 | 6.37 | 4.82 × 10^-1 | 2.54 | 2.34 | 4.84
 | | STD | 1.25 | 9.07 | 1.07 × 10^1 | 1.95 | 4.80 | 8.66 | 8.96
 | 50 | Avg best | 5.97 × 10^-13 | 5.03 × 10^-13 | 3.57 × 10^-13 | 7.73 × 10^-13 | 8.02 × 10^-13 | 6.73 × 10^-13 | 3.55 × 10^-14
 | | Avg worst | 7.67 | 2.91 × 10^1 | 31 | 5.58 | 11.6 | 35.1 | 32.2
 | | Avg mean | 4.42 × 10^-1 | 4.86 | 8.42 | 5.09 × 10^-1 | 1.18 | 2.07 | 7.84
 | | STD | 1.39 | 7.00 | 9.56 | 1.09 | 2.18 | 5.97 | 9.05
OCO | 10 | Avg best | 0 | 0 | 0 | 0 | 0 | 0.15 × 10^-14 | 0.52 × 10^-14
 | | Avg worst | 10.01 | 10.51 | 23.12 | 13.10 | 11.45 | 21.15 | 10.15
 | | Avg mean | 1.20 | 2.85 | 4.45 | 2.64 | 3.15 | 1.85 | 1.50
 | | STD | 0.12 | 1.25 | 3.25 | 0.85 | 1.25 | 0.15 | 0.05
 | 50 | Avg best | 0 | 0 | 2.31 × 10^-14 | 0 | 0 | 0 | 1.04 × 10^-14
 | | Avg worst | 19.20 | 30.45 | 44.45 | 40.20 | 25.85 | 13.74 | 51.55
 | | Avg mean | 2.50 | 3.25 | 6.12 | 3.42 | 1.26 | 1.12 | 2.04
 | | STD | 0.54 | 2.05 | 5.45 | 0.45 | 1.05 | 0.80 | 0.46
Table 4. Error values achieved for problem F2.

Technique Used | Errors | T1 | T2 | T3 | T4 | T5 | T6 | T7
SGA | Avg best | 2.1 × 10^-5 | 3.14 × 10^-5 | 5.83 × 10^-5 | 9.58 × 10^-5 | 0.91 × 10^-5 | 6.5 × 10^-5 | 0.94 × 10^-5
 | Avg worst | 98.65 | 458.12 | 489.45 | 123.15 | 398.54 | 123.15 | 178.54
 | Avg mean | 34.17 | 83.91 | 128.52 | 32.54 | 94.28 | 123.15 | 78.54
 | STD | 10.48 | 13.02 | 15.12 | 11.02 | 12.46 | 21.78 | 44.38
HMGA | Avg best | 2.04 × 10^-7 | 7.02 × 10^-6 | 4.56 × 10^-7 | 1.21 × 10^-7 | 3.57 × 10^-7 | 1.21 × 10^-7 | 10.57 × 10^-6
 | Avg worst | 7.43 | 11.05 | 21.72 | 7.86 | 9.29 | 8.29 | 32.52
 | Avg mean | 6.76 | 10.31 | 19.62 | 8.27 | 12.76 | 17.29 | 25.83
 | STD | 3.16 | 8.09 | 11.25 | 2.63 | 8.61 | 7.37 | 12.83
RIGA | Avg best | 1.68 × 10^-7 | 6.07 × 10^-6 | 4.96 × 10^-7 | 2.31 × 10^-7 | 4.67 × 10^-7 | 2.11 × 10^-7 | 11.97 × 10^-6
 | Avg worst | 6.13 | 9.15 | 18.37 | 6.81 | 17.27 | 9.13 | 20.43
 | Avg mean | 4.36 | 11.31 | 18.92 | 8.37 | 10.21 | 15.08 | 11.13
 | STD | 2.62 | 7.07 | 10.35 | 2.23 | 6.51 | 6.45 | 10.53
CPSO | Avg best | 9.377 × 10^-5 | 7.423 × 10^-5 | 4.651 × 10^-5 | 1.121 × 10^-5 | 7.792 × 10^-5 | 1.087 × 10^-4 | 2.978 × 10^-7
 | Avg worst | 19.26 | 144.1 | 158.3 | 10.18 | 320.7 | 26.08 | 30.44
 | Avg mean | 1.247 | 10.1 | 10.27 | 0.5664 | 25.14 | 1.987 | 3.651
 | STD | 4.178 | 35.06 | 33.45 | 2.137 | 64.25 | 5.217 | 6.927
DASA | Avg best | 1.97 × 10^-11 | 2.34 × 10^-11 | 2.72 × 10^-11 | 1.41 × 10^-11 | 3.59 × 10^-11 | 1.65 × 10^-11 | 1.3 × 10^-12
 | Avg worst | 3.39 × 10^1 | 4.03 × 10^2 | 3.56 × 10^2 | 1.65 × 10^1 | 4.33 × 10^2 | 2.49 × 10^1 | 36.70
 | Avg mean | 3.30 | 2.56 × 10^1 | 1.89 × 10^1 | 1.45 | 4.96 × 10^1 | 2.11 | 3.87
 | STD | 8.78 | 8.32 × 10^1 | 6.78 × 10^1 | 3.83 | 1.12 × 10^5 | 5.29 | 8.12
OCO | Avg best | 0 | 3.45 × 10^-10 | 3.03 × 10^-10 | 4.55 × 10^-10 | 4.06 × 10^-9 | 6.25 × 10^-9 | 7.05 × 10^-9
 | Avg worst | 20.14 | 23.54 | 20.26 | 30.08 | 100.15 | 120.01 | 101.54
 | STD | 40.74 | 89.15 | 36.04 | 60.02 | 80.01 | 94.45 | 100.01
Table 5. Error values achieved for problem F3.

Technique Used | Errors | T1 | T2 | T3 | T4 | T5 | T6 | T7
SGA | Avg best | 4.31 × 10^-3 | 8.35 × 10^-3 | 4.23 × 10^-3 | 9.85 × 10^-3 | 1.47 × 10^-3 | 6.51 × 10^-3 | 2.81 × 10^-3
 | Avg worst | 792.45 | 958.12 | 925.45 | 1123.15 | 1043.54 | 523.85 | 878.94
 | Avg mean | 141.85 | 582.51 | 554.72 | 497.23 | 597.63 | 229.93 | 292.64
 | STD | 83.14 | 140.13 | 231.12 | 98 | 96.59 | 93.15 | 77.42
HMGA | Avg best | 14.32 × 10^-4 | 18.25 × 10^-4 | 9.17 × 10^-4 | 21.47 × 10^-4 | 8.43 × 10^-4 | 16.63 × 10^-4 | 9.87 × 10^-4
 | Avg worst | 91.84 | 102 | 197.14 | 123.45 | 198.15 | 183.80 | 178.24
 | Avg mean | 141.18 | 245.89 | 254.87 | 121 | 114.17 | 115.73 | 129.04
 | STD | 81.47 | 101.12 | 76.15 | 119.85 | 94.57 | 93.79 | 79.42
RIGA | Avg best | 4.31 × 10^-4 | 7.74 × 10^-4 | 2.47 × 10^-4 | 15.47 × 10^-4 | 4.23 × 10^-4 | 8.95 × 10^-5 | 6.83 × 10^-4
 | Avg worst | 73.74 | 96 | 143.14 | 98.85 | 106.45 | 123.75 | 108.02
 | Avg mean | 23.28 | 26.91 | 134.77 | 68.09 | 98.74 | 101.13 | 97.08
 | STD | 41.46 | 89.82 | 56.95 | 98.87 | 64.07 | 13.69 | 59.72
CPSO | Avg best | 0.003947 | 126.2 | 42.89 | 7.909 × 10^-5 | 228.5 | 4.356 | 0.9334
 | Avg worst | 711.2 | 1008 | 966.1 | 1204 | 974.2 | 1424 | 1011
 | Avg mean | 137.5 | 855.1 | 765.9 | 430.6 | 859.7 | 753 | 653.7
 | STD | 221.6 | 161 | 235.8 | 432.2 | 121.5 | 361.7 | 334
DASA | Avg best | 3.39 × 10^-11 | 4.34 × 10^-1 | 1.38 | 4.51 × 10^-11 | 3.08 | 4.21 × 10^-11 | 0.106
 | Avg worst | 4.35 × 10^2 | 9.88 × 10^2 | 9.37 × 10^2 | 1.17 × 10^3 | 9.23 × 10^2 | 1.47 × 10^3 | 9.09 × 10^2
 | Avg mean | 1.57 × 10^1 | 8.24 × 10^2 | 6.88 × 10^2 | 4.35 × 10^2 | 6.97 × 10^2 | 6.26 × 10^2 | 4.33 × 10^2
 | STD | 6.71 × 10^1 | 2.04 × 10^2 | 2.98 × 10^2 | 4.41 × 10^2 | 3.15 × 10^2 | 4.60 × 10^2 | 3.80 × 10^2
OCO | Avg best | 6.41 × 10^-8 | 3.32 × 10^-8 | 4.89 × 10^-8 | 0.18 × 10^-8 | 5.58 × 10^-8 | 2.23 × 10^-8 | 1.25 × 10^-8
 | Avg worst | 106.05 | 215.68 | 220.06 | 125.67 | 218.83 | 143.93 | 200.87
 | Avg mean | 261.74 | 221.57 | 315.32 | 332.03 | 215.63 | 240.76 | 290.87
 | STD | 95.14 | 100.50 | 130.12 | 210.45 | 145.60 | 120.15 | 175.45
Table 6. Error values achieved for problem F4.

Technique Used | Errors | T1 | T2 | T3 | T4 | T5 | T6 | T7
SGA | Avg best | 2.13 × 10^-3 | 6.34 × 10^-4 | 15.12 × 10^-3 | 9.97 × 10^-3 | 11.17 × 10^-3 | 21.54 × 10^-3 | 10.714 × 10^-3
 | Avg worst | 172.5 | 248 | 431.02 | 351.62 | 558.43 | 261.27 | 454.21
 | Avg mean | 141.41 | 125.15 | 214.54 | 297.23 | 184.79 | 163.64 | 102.34
 | STD | 103.14 | 90.05 | 112.02 | 118 | 156.49 | 103.74 | 52.48
HMGA | Avg best | 4.51 × 10^-4 | 8.05 × 10^-4 | 4.03 × 10^-3 | 1.72 × 10^-3 | 2.47 × 10^-3 | 19.34 × 10^-3 | 7.164 × 10^-3
 | Avg worst | 151.84 | 180 | 287.13 | 121.22 | 98.15 | 191.29 | 164.34
 | Avg mean | 81.41 | 125.15 | 114.54 | 97.23 | 84.79 | 121.74 | 152.56
 | STD | 14.18 | 35.85 | 25.89 | 14.88 | 56.48 | 63.64 | 102.34
RIGA | Avg best | 1.71 × 10^-5 | 2.45 × 10^-5 | 2.13 × 10^-5 | 2.28 × 10^-5 | 0.41 × 10^-5 | 9.38 × 10^-5 | 5.34 × 10^-5
 | Avg worst | 189.54 | 195.76 | 125.18 | 121.21 | 98.12 | 81.39 | 174.24
 | Avg mean | 21.67 | 108.65 | 119.81 | 97.23 | 84.29 | 128.24 | 107.04
 | STD | 10.68 | 48.25 | 35.89 | 11.86 | 77.98 | 57.44 | 92.96
CPSO | Avg best | 6.36 × 10^-5 | 0.0001868 | 0.000103 | 9.346 × 10^-6 | 0.000407 | 8.616 × 10^-5 | 3.31 × 10^-6
 | Avg worst | 29.38 | 459.8 | 389.4 | 14.62 | 48 | 163.06 | 93.32
 | Avg mean | 2.677 | 37.15 | 36.67 | 0.7926 | 67.17 | 4.881 | 7.792
 | STD | 7.055 | 99.43 | 97.18 | 2.775 | 130.3 | 15.39 | 19.21
DASA | Avg best | 2.01 × 10^-11 | 2.95 × 10^-11 | 2.87 × 10^-11 | 1.85 × 10^-11 | 5.89 × 10^-11 | 2.09 × 10^-11 | 7.10 × 10^-11
 | Avg worst | 5.76 × 10^1 | 5.05 × 10^2 | 5.40 × 10^2 | 1.88 × 10^1 | 5.28 × 10^2 | 3.97 × 10^1 | 4.51 × 10^2
 | Avg mean | 5.60 | 6.56 × 10^1 | 5.36 × 10^1 | 1.85 | 1.08 × 10^2 | 2.98 | 27.4
 | STD | 2.65 × 10^1 | 1.60 × 10^2 | 1.40 × 10^2 | 4.22 | 1.78 × 10^2 | 7.59 | 90
OCO | Avg best | 5.31 × 10^-9 | 9.08 × 10^-10 | 5.89 × 10^-10 | 2.04 × 10^-10 | 2.04 × 10^-10 | 2.27 × 10^-9 | 1.14 × 10^-9
 | Avg worst | 15.15 | 40.80 | 54.15 | 90.05 | 25.20 | 40.05 | 85.26
 | Avg mean | 10.15 | 25.60 | 21.10 | 15.60 | 40.65 | 15.23 | 45.58
 | STD | 5.37 | 15.4 | 11.45 | 10.12 | 22.21 | 11.15 | 6.72
Table 7. Error values achieved for problem F5.

Technique Used | Errors | T1 | T2 | T3 | T4 | T5 | T6 | T7
SGA | Avg best | 6.83 × 10^-3 | 7.61 × 10^-3 | 5.71 × 10^-3 | 3.87 × 10^-3 | 8.46 × 10^-3 | 5.12 × 10^-4 | 6.05 × 10^-3
 | Avg worst | 181.53 | 197.82 | 112.15 | 120.65 | 189.84 | 112.15 | 201.24
 | Avg mean | 68.35 | 71.57 | 128.62 | 92.84 | 104.29 | 91.23 | 81.34
 | STD | 43.67 | 55.75 | 61.02 | 50.32 | 78.32 | 64.25 | 41.74
HMGA | Avg best | 1.58 × 10^-4 | 3.22 × 10^-4 | 3.33 × 10^-4 | 4.85 × 10^-5 | 1.37 × 10^-4 | 2.07 × 10^-4 | 2.05 × 10^-5
 | Avg worst | 81.64 | 97.89 | 101.05 | 138.75 | 169.84 | 102.75 | 173.14
 | Avg mean | 88.25 | 91.58 | 68.52 | 102.14 | 93.39 | 87.13 | 74.54
 | STD | 33.62 | 45.15 | 21.72 | 37.42 | 38.39 | 39.85 | 14.74
RIGA | Avg best | 5.58 × 10^-5 | 2.28 × 10^-5 | 7.03 × 10^-5 | 2.55 × 10^-6 | 0.35 × 10^-5 | 1.07 × 10^-5 | 1.75 × 10^-5
 | Avg worst | 71.64 | 87.82 | 71.25 | 83.75 | 69.84 | 82.95 | 73.14
 | Avg mean | 78.25 | 81.58 | 98.52 | 72.14 | 74.59 | 47.53 | 34.54
 | STD | 33.67 | 45.85 | 51.72 | 30.42 | 38.39 | 19.85 | 19.74
CPSO | Avg best | 0.0001584 | 0.0003224 | 0.0003337 | 4.85 × 10^-6 | 0.0001377 | 0.0002077 | 2.052 × 10^-6
 | Avg worst | 25.41 | 31.76 | 27.77 | 26.66 | 63.2 | 42.54 | 103.2
 | Avg mean | 1.855 | 2.879 | 3.403 | 1.095 | 7.986 | 4.053 | 6.527
 | STD | 5.181 | 6.787 | 6.448 | 4.865 | 13.81 | 8.371 | 22.8
DASA | Avg best | 3.22 × 10^-11 | 3.74 × 10^-11 | 3.86 × 10^-11 | 2.69 × 10^-11 | 5.99 × 10^-11 | 2.85 × 10^-11 | 1.93 × 10^-12
 | Avg worst | 1.71 × 10^1 | 2.22 × 10^1 | 1.60 × 10^1 | 8.10 | 2.90 × 10^1 | 8.75 | 18.7
 | Avg mean | 9.55 × 10^-1 | 9.90 × 10^-1 | 9.49 × 10^-1 | 3.92 × 10^-1 | 2.30 | 4.67 × 10^-1 | 1.11
 | STD | 3.43 | 4.05 | 3.31 | 1.61 | 6.36 | 1.73 | 3.76
OCO | Avg best | 0 | 0 | 0 | 0 | 0 | 7.15 × 10^-10 | 6.15 × 10^-10
 | Avg worst | 74.13 | 91.78 | 104.77 | 106.51 | 92.81 | 114.51 | 125.35
 | Avg mean | 51.08 | 80.26 | 90.29 | 70.62 | 64.47 | 71.92 | 93.12
 | STD | 12.12 | 21.37 | 41.08 | 49.43 | 51.15 | 31.28 | 56.89
Table 8. Error values achieved for problem F6.

Technique Used | Errors | T1 | T2 | T3 | T4 | T5 | T6 | T7
SGA | Avg best | 1.31 × 10^-2 | 4.44 × 10^-3 | 9.61 × 10^-3 | 1.48 × 10^-2 | 0.93 × 10^-3 | 5.04 × 10^-3 | 0.724 × 10^-3
 | Avg worst | 247.6 | 558.2 | 683.6 | 723 | 308.5 | 530.1 | 698.3
 | Avg mean | 63.88 | 208.3 | 98.53 | 63.74 | 160.28 | 153.45 | 168.34
 | STD | 39.55 | 129.61 | 81.18 | 48.42 | 106.49 | 187.95 | 134.12
HMGA | Avg best | 5.34 × 10^-3 | 2.14 × 10^-3 | 6.31 × 10^-3 | 9.78 × 10^-3 | 0.23 × 10^-3 | 1.37 × 10^-3 | 0.22 × 10^-3
 | Avg worst | 87.79 | 252.5 | 304.8 | 198.6 | 475.8 | 265.7 | 424.5
 | Avg mean | 51.12 | 98.71 | 80.27 | 62.83 | 113.56 | 96.56 | 95.63
 | STD | 10.97 | 63.77 | 33.88 | 24.23 | 60.65 | 46.76 | 75.91
RIGA | Avg best | 4.67 × 10^-4 | 0.04 × 10^-3 | 7.81 × 10^-3 | 10.08 × 10^-4 | 7.93 × 10^-4 | 0.86 × 10^-3 | 1.32 × 10^-3
 | Avg worst | 77.81 | 192.6 | 214.9 | 208.6 | 365.7 | 275.2 | 364.7
 | Avg mean | 48.82 | 88.61 | 109.67 | 52.89 | 153.76 | 169.96 | 108.73
 | STD | 10.37 | 53.67 | 81.28 | 20.03 | 130.85 | 52.06 | 71.23
CPSO | Avg best | 0.0001693 | 0.000126 | 0.0006566 | 1.28 × 10^-5 | 0.001835 | 0.0002852 | 0.0002053
 | Avg worst | 37.79 | 258.5 | 504.8 | 131.8 | 628.8 | 265.7 | 424.5
 | Avg mean | 6.725 | 21.57 | 27.13 | 9.27 | 71.57 | 23.67 | 32.58
 | STD | 9.974 | 63.51 | 83.98 | 24.23 | 160.3 | 51.55 | 76.9
DASA | Avg best | 2.36 × 10^-11 | 3.58 × 10^-11 | 3.69 × 10^-11 | 2.55 × 10^-11 | 6.37 × 10^-11 | 2.56 × 10^-11 | 6.48 × 10^-12
 | Avg worst | 4.83 × 10^1 | 5.54 × 10^2 | 5.29 × 10^2 | 8.16 × 10^1 | 4.99 × 10^2 | 2.49 × 10^2 | 1.37 × 10^2
 | Avg mean | 8.87 | 37 | 26.7 | 9.74 | 37.9 | 13.3 | 11.7
 | STD | 13.3 | 1.22 × 10^2 | 98.4 | 22 | 1.18 × 10^2 | 57.4 | 36.7
OCO | Avg best | 3.45 × 10^-8 | 7.82 × 10^-8 | 0.16 × 10^-8 | 1.12 × 10^-8 | 2.12 × 10^-8 | 0.46 × 10^-8 | 3.36 × 10^-8
 | Avg worst | 18.67 | 20.15 | 24.62 | 85.36 | 50.69 | 60.93 | 43.17
 | Avg mean | 7.58 | 30.84 | 49.72 | 67.09 | 75.24 | 28.56 | 29.13
 | STD | 5.09 | 17.65 | 21.78 | 15.79 | 32.93 | 22.71 | 23.86
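
As a reading aid for Tables 3–8, the following minimal sketch shows how such error statistics can be assembled, assuming the usual GDBG convention that the error of a run is the absolute gap between the best fitness found and the known optimum at each change period; the helper name error_statistics and its inputs are illustrative placeholders, not taken from the paper's implementation.

```python
import numpy as np

def error_statistics(best_fitness, optima):
    """Summarize absolute errors over independent runs (cf. Tables 3-8)."""
    errors = np.abs(np.asarray(best_fitness, dtype=float)
                    - np.asarray(optima, dtype=float))
    return {
        "Avg best": errors.min(),   # smallest error over the runs
        "Avg worst": errors.max(),  # largest error over the runs
        "Avg mean": errors.mean(),  # mean error over the runs
        "STD": errors.std(ddof=1),  # sample standard deviation
    }

# Example: errors of five hypothetical runs on one change type (T1).
print(error_statistics([100.2, 100.0, 101.7, 100.4, 100.9],
                       [100.0] * 5))
```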
Table 9. Overall performance of the algorithms.

Algorithm | | F1 (10) | F1 (50) | F2 | F3 | F4 | F5 | F6
SGA | T1 | 0.852 | 0.826 | 0.316 | 0.099 | 0.325 | 0.299 | 0.283
 | T2 | 0.799 | 0.777 | 0.228 | 0.018 | 0.185 | 0.353 | 0.265
 | T3 | 0.768 | 0.730 | 0.291 | 0.033 | 0.259 | 0.349 | 0.289
 | T4 | 0.647 | 0.660 | 0.242 | 0.050 | 0.249 | 0.276 | 0.383
 | T5 | 0.851 | 0.874 | 0.315 | 0.040 | 0.239 | 0.365 | 0.392
 | T6 | 0.543 | 0.504 | 0.221 | 0.043 | 0.213 | 0.271 | 0.380
 | T7 | 0.737 | 0.710 | 0.334 | 0.074 | 0.287 | 0.334 | 0.294
HMGA | T1 | 0.872 | 0.844 | 0.371 | 0.263 | 0.336 | 0.384 | 0.333
 | T2 | 0.808 | 0.794 | 0.253 | 0.111 | 0.214 | 0.366 | 0.292
 | T3 | 0.746 | 0.730 | 0.318 | 0.261 | 0.257 | 0.405 | 0.336
 | T4 | 0.673 | 0.709 | 0.327 | 0.151 | 0.292 | 0.309 | 0.260
 | T5 | 0.831 | 0.874 | 0.299 | 0.272 | 0.388 | 0.508 | 0.322
 | T6 | 0.586 | 0.525 | 0.295 | 0.139 | 0.298 | 0.333 | 0.248
 | T7 | 0.763 | 0.718 | 0.396 | 0.156 | 0.337 | 0.409 | 0.376
RIGA | T1 | 0.889 | 0.788 | 0.769 | 0.442 | 0.388 | 0.469 | 0.542
 | T2 | 0.848 | 0.678 | 0.658 | 0.385 | 0.378 | 0.558 | 0.345
 | T3 | 0.746 | 0.748 | 0.512 | 0.389 | 0.448 | 0.472 | 0.489
 | T4 | 0.723 | 0.858 | 0.351 | 0.386 | 0.458 | 0.461 | 0.389
 | T5 | 0.758 | 0.523 | 0.256 | 0.364 | 0.423 | 0.376 | 0.441
 | T6 | 0.423 | 0.458 | 0.431 | 0.399 | 0.458 | 0.461 | 0.487
 | T7 | 0.658 | 0.523 | 0.456 | 0.483 | 0.463 | 0.456 | 0.381
CPSO | T1 | 0.958 | 0.978 | 0.858 | 0.295 | 0.478 | 0.658 | 0.645
 | T2 | 0.887 | 0.845 | 0.745 | 0.561 | 0.545 | 0.645 | 0.561
 | T3 | 0.845 | 0.874 | 0.781 | 0.421 | 0.374 | 0.678 | 0.521
 | T4 | 0.795 | 0.712 | 0.645 | 0.328 | 0.512 | 0.445 | 0.378
 | T5 | 0.791 | 0.689 | 0.432 | 0.485 | 0.389 | 0.532 | 0.485
 | T6 | 0.623 | 0.658 | 0.651 | 0.589 | 0.558 | 0.451 | 0.489
 | T7 | 0.458 | 0.523 | 0.256 | 0.541 | 0.323 | 0.456 | 0.541
DASA | T1 | 0.942 | 0.941 | 0.728 | 0.463 | 0.688 | 0.665 | 0.789
 | T2 | 0.892 | 0.888 | 0.575 | 0.390 | 0.470 | 0.612 | 0.789
 | T3 | 0.869 | 0.838 | 0.580 | 0.526 | 0.490 | 0.603 | 0.432
 | T4 | 0.977 | 0.975 | 0.900 | 0.380 | 0.883 | 0.874 | 0.631
 | T5 | 0.889 | 0.918 | 0.569 | 0.472 | 0.463 | 0.609 | 0.655
 | T6 | 0.882 | 0.873 | 0.644 | 0.435 | 0.569 | 0.539 | 0.459
 | T7 | 0.857 | 0.830 | 0.549 | 0.559 | 0.572 | 0.589 | 0.414
OCO | T1 | 0.858 | 0.858 | 0.694 | 0.715 | 0.752 | 0.848 | 0.625
 | T2 | 0.687 | 0.645 | 0.654 | 0.645 | 0.675 | 0.898 | 0.789
 | T3 | 0.635 | 0.802 | 0.885 | 0.658 | 0.597 | 0.789 | 0.614
 | T4 | 0.795 | 0.779 | 0.714 | 0.608 | 0.647 | 0.895 | 0.585
 | T5 | 0.791 | 0.783 | 0.621 | 0.515 | 0.546 | 0.538 | 0.574
 | T6 | 0.712 | 0.668 | 0.578 | 0.749 | 0.545 | 0.589 | 0.658
 | T7 | 0.659 | 0.654 | 0.654 | 0.645 | 0.658 | 0.789 | 0.658

Performance: SGA 34.27 | HMGA 39.11 | RIGA 49.55 | CPSO 57.57 | DASA 65.21 | OCO 68.75
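
The bottom row of Table 9 aggregates the per-problem, per-change-type marks into a single score. As one plausible reading, the sketch below averages the marks and rescales them to 100, assuming an unweighted mean; the GDBG marking scheme allows per-case weights, so this is an illustration rather than the exact formula used for the row above.

```python
import numpy as np

def overall_performance(marks):
    """Aggregate per-problem, per-change-type marks into one score.

    marks: dict mapping problem label -> seven marks in [0, 1],
    one per change type T1..T7 (the cells of Table 9).
    """
    values = np.concatenate([np.asarray(v, dtype=float)
                             for v in marks.values()])
    return 100.0 * values.mean()  # unweighted mean, rescaled to 100

# OCO's F1 (10 peaks) column from Table 9; the full score would use
# all seven columns of the table.
oco_f1_10 = {"F1 (10)": [0.858, 0.687, 0.635, 0.795, 0.791, 0.712, 0.659]}
print(round(overall_performance(oco_f1_10), 2))  # 73.39 for this column alone
```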
Table 10. Instance F-n45-k4 with random traffic; f = 5.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 802.80 | 810.40 | 830.20 | 835.10
ACO | 800.70 | 950.55 | 995.10 | 1010.50
HMGA | 900.60 | 1000.20 | 1110.00 | 1020.20

Table 11. Instance F-n45-k4 with random traffic; f = 100.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 810.10 | 802.00 | 804.80 | 805.30
ACO | 807.90 | 808.10 | 810.10 | 820.60
HMGA | 850.60 | 850.80 | 855.00 | 860.20
Table 12. Instance F-n72-k4 with random traffic; f = 5.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 265.40 | 270.50 | 287.35 | 290.15
ACO | 300.10 | 320.80 | 360.80 | 385.50
HMGA | 350.60 | 400.00 | 410.00 | 420.30
Table 13. Instance F-n72-k4 with random traffic; f = 100.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 270.90 | 272.60 | 275.20 | 276.20
ACO | 290.70 | 295.10 | 300.70 | 308.90
HMGA | 300.80 | 310.20 | 315.50 | 320.10

Table 14. Instance F-n135-k7 with random traffic; f = 5.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 1330.50 | 1355.10 | 1370.60 | 1380.00
ACO | 1355.50 | 1354.30 | 1364.60 | 1385.40
HMGA | 1300.20 | 1430.20 | 1440.10 | 1450.20

Table 15. Instance F-n135-k7 with random traffic; f = 100.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 1270.50 | 1285.40 | 1290.60 | 1298.00
ACO | 1290.10 | 1305.20 | 1310.30 | 1320.60
HMGA | 1460.60 | 1470.20 | 1475.00 | 1500.10

Table 16. Instance F-n45-k4 with cyclic traffic; f = 5.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 800.80 | 810.70 | 820.70 | 835.30
ACO | 831.60 | 840.70 | 850.00 | 860.60
HMGA | 800.40 | 820.20 | 950.00 | 960.10

Table 17. Instance F-n45-k4 with cyclic traffic; f = 100.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 800.80 | 803.60 | 804.30 | 815.70
ACO | 808.05 | 810.79 | 815.70 | 820.80
HMGA | 880.10 | 890.80 | 900.00 | 920.30

Table 18. Instance F-n72-k4 with cyclic traffic; f = 5.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 280.10 | 288.20 | 289.40 | 290.70
ACO | 270.50 | 275.00 | 296.50 | 298.40
HMGA | 320.60 | 340.20 | 340.00 | 360.80

Table 19. Instance F-n72-k4 with cyclic traffic; f = 100.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 270.80 | 271.30 | 274.80 | 278.50
ACO | 280.40 | 281.70 | 282.80 | 282.80
HMGA | 300.60 | 310.10 | 350.00 | 350.80

Table 20. Instance F-n135-k7 with cyclic traffic; f = 5.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 1320.30 | 1330.40 | 1340.20 | 1351.50
ACO | 1310.80 | 1320.30 | 1339.10 | 1365.40
HMGA | 1350.60 | 1400.20 | 1410.20 | 1440.30

Table 21. Instance F-n135-k7 with cyclic traffic; f = 100.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 1273.60 | 1280.50 | 1290.30 | 1295.10
ACO | 1290.80 | 1300.50 | 1308.10 | 1310.00
HMGA | 1400.60 | 1420.05 | 1450.10 | 1460.00
Table 22. Instance F-n135-k7 with cyclic traffic; f = 100.

Algorithm | m = 0.1 | m = 0.25 | m = 0.5 | m = 0.75
OCO | 1273.60 | 1280.50 | 1290.30 | 1295.10
ACO | 1290.80 | 1300.50 | 1308.10 | 1310.00
HMGA | 1330.10 | 1450.20 | 1470.00 | 1500.20
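
Tables 10–22 vary two benchmark parameters: the traffic state m (the proportion of arcs affected by traffic) and the change frequency f. The following minimal sketch illustrates a random-traffic scheme in the spirit of the generator of Mavrovouniotis and Yang [47,48], assuming each affected arc's cost is scaled by a random surcharge every f iterations; the function and parameter names here are ours, not taken from the generator's code.

```python
import random

def apply_random_traffic(distances, m=0.25, r_max=1.0, rng=random):
    """Re-draw traffic-inflated travel costs for one change period.

    distances: dict {(i, j): base distance between customers i and j}
    m: traffic state, i.e., the probability that an arc is affected
       (the tables report m in {0.1, 0.25, 0.5, 0.75})
    r_max: cap of the random surcharge; an affected arc costs between
       1x and (1 + r_max)x its base distance
    """
    costs = {}
    for arc, base in distances.items():
        if rng.random() < m:  # arc hit by traffic in this period
            costs[arc] = base * (1.0 + rng.uniform(0.0, r_max))
        else:                 # free-flow arc keeps its base cost
            costs[arc] = base
    return costs

# The optimizer would call this every f iterations (f = 5 or f = 100
# in Tables 10-22) to obtain the next environment; cyclic traffic would
# instead rotate through a fixed set of previously drawn cost states.
```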