Article

The Stochastic Team Orienteering Problem with Position-Dependent Rewards

1 Department of Management, Universitat Politècnica de Catalunya—BarcelonaTech, 08028 Barcelona, Spain
2 Department of Economics, Quantitative Methods and Economic History, Universidad Pablo de Olavide, 41013 Seville, Spain
3 Department of Applied Statistics and Operations Research, Universitat Politècnica de València, 03801 Alcoy, Spain
4 Department of Industrial Engineering and Management Science, Universidad de Sevilla, 41092 Seville, Spain
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2856; https://doi.org/10.3390/math10162856
Submission received: 20 July 2022 / Revised: 5 August 2022 / Accepted: 9 August 2022 / Published: 10 August 2022

Abstract: In this paper, we analyze both the deterministic and stochastic versions of a team orienteering problem (TOP) in which rewards from customers are dynamic. The typical goal of the TOP is to select a set of customers to visit in order to maximize the total reward gathered by a fixed fleet of vehicles. To better reflect some real-life scenarios, we consider a version in which rewards associated with each customer might depend upon the order in which the customer is visited within a route, bonusing the first clients and penalizing the last ones. In addition, travel times are modeled as random variables. Two mixed-integer programming models are proposed for the deterministic version, which is then solved using a well-known commercial solver. Furthermore, a biased-randomized iterated local search algorithm is employed to solve this deterministic version. Overall, the proposed metaheuristic algorithm shows an outstanding performance when compared with the optimal or near-optimal solutions provided by the commercial solver, both in terms of solution quality and computational time. Then, the metaheuristic algorithm is extended into a full simheuristic in order to solve the stochastic version of the problem. A series of numerical experiments allows us to show that the solutions provided by the simheuristic outperform the near-optimal solutions obtained for the deterministic version of the problem when the latter are used in a scenario under conditions of uncertainty. In addition, the solutions provided by our simheuristic algorithm for the stochastic version of the problem offer a higher reliability level than the ones obtained with the commercial solver.

1. Introduction

The TOP was initially proposed by Chao et al. [1]. In its basic version, a fixed fleet of vehicles, initially located at an origin depot, must select a series of customers to visit on the way towards a destination depot. Visiting a customer for the first time generates a reward, and the goal is to maximize the total collected reward. Since there is a maximum time or distance that each vehicle can cover, it is likely that not all potential customers can be visited. Hence, the proper selection of the customers to be included in each vehicle’s route, as well as the order in which they have to be visited, constitutes a challenge for the decision maker. The TOP is gaining momentum due to its applications in the coordination of multiple unmanned aerial vehicles and self-driving electric vehicles, the batteries of which have a limited driving range [2,3,4].
In this paper, we consider a version of the TOP that combines both stochastic and position-dependent components. In particular, travel times are considered random variables, and reward values can change according to the position of the associated customer in the route, i.e., a TOP with position-dependent rewards (TOP-PDR). This effect can be found in many real-life applications, e.g., whenever the benefits gathered by visiting a customer (such as a local retailer) early in the morning might be higher than the ones obtained from the same customer if it is visited at the end of the day. For instance, Herrera et al. [5] discuss a real-life application where pharmaceutical retailers are happy if they are visited early in the morning, while the distributor has to assume a penalty cost whenever a customer is visited during the last hours of the day. To the best of our knowledge, despite its relevance in practical applications, this enriched variant, the stochastic and position-dependent TOP (STOP-PDR), has not yet been analyzed in the literature. Hence, the main contributions of our work can be described as follows: (i) the introduction of a variant of the classical TOP which considers both stochastic and position-dependent elements; (ii) a mathematical model for the TOP-PDR, in which rewards are deterministic in nature but position-dependent; (iii) the use of both exact and metaheuristic methods to solve the deterministic and position-dependent version of the optimization problem; and (iv) the extension, after validating its quality, of our metaheuristic algorithm into a simheuristic [6] in order to solve the STOP-PDR. As illustrated in Hatami et al. [7], simheuristics combine metaheuristic algorithms with simulation to solve stochastic optimization problems.
The rest of the paper is structured as follows: Section 2 reviews some related work on the stochastic TOP. Section 3 describes, in a formal way, the deterministic and stochastic versions of the TOP considered in this manuscript, proposing a mathematical formulation for each case. Section 4 proposes a biased-randomized iterated local search (BR-ILS) algorithm [8,9] to solve the deterministic TOP-PDR. It also extends the proposed BR-ILS algorithm into a simheuristic by incorporating Monte Carlo simulation (MCS). Section 5 starts by applying a truncated branch-and-cut (B&C) algorithm and several pre-processing procedures to solve a set of instances of the TOP-PDR, and finishes by testing the BR-ILS, comparing its results with those provided by the B&C algorithm. In Section 6 we analyze the results gathered with the simheuristic algorithm for the STOP-PDR. Finally, in Section 7 we highlight the main conclusions of this paper and propose some open lines of research.

2. Related Work

The classical version of the TOP was initially proposed by Chao et al. [1] as a multi-vehicle extension of the orienteering problem (OP) introduced by Golden et al. [10]. Both problems are NP-hard. Although most studies focus on deterministic versions of the TOP, either using exact methods [11] or approximate ones [12], stochastic and position-dependent versions have received much less attention so far, mainly due to the additional complexity involved. Some examples of exact methods employed to solve deterministic TOPs are B&C methods [13] and branch-and-cut-and-price methods [11]. However, since the number of TOP instances that can be solved by employing exact methods is typically limited to a few hundred nodes, many metaheuristic approaches have been proposed to solve larger instances. Among these approaches, we can highlight tabu search and variable neighborhood search algorithms [12], particle swarm optimization [13], simulated annealing [14], and genetic algorithms [15].
Despite being more realistic than its deterministic counterpart, the stochastic TOP has only been explored in recent years. Hence, Panadero et al. [2] considered a TOP with stochastic travel times and proposed a simheuristic algorithm, which combined a variable neighborhood search (VNS) metaheuristic with MCS to efficiently deal with the stochastic TOP. In a similar way, Mei and Zhang [16] proposed a genetic programming hyper-heuristic to solve a stochastic TOP with time windows, where service times at each node were modeled as random variables. Furthermore, Song et al. [17] modeled a subscription delivery problem as a stochastic TOP with time windows and consistency in the driver assigned to each customer. Likewise, stochastic approaches have been employed in relation to the OP, which is a simplified version of the TOP that considers one single route [18]. In this sense, Bian and Liu [19] analyzed the operational-level stochastic orienteering problem, in which travel and service times were stochastic. The routing plan could be adjusted in real time, so that the vehicle could increase the collected reward and the probability of on-time arrival. Similarly, Dolinskaya et al. [20] dealt with an extension of the OP with stochastic travel times. In their version, it was possible to increase the likelihood of collecting a greater reward by adapting paths between reward nodes as travel times were revealed. Other stochastic approaches to the OP include those studied in Thayer and Carpin [21] and Thayer and Carpin [22]. In these cases, travel times between pairs of nodes were continuous random variables, and the reward functions for each state/action pair were equivalent regardless of the node visiting time or order.
Regarding position-dependent versions of the TOP, Reyes-Rubiano et al. [23] proposed a biased-randomized learnheuristic for solving the TOP with position-dependent rewards. In their work, rewards associated with each node were given by unknown (black-box) functions, the values of which were estimated based on existing observations and using learning mechanisms. Bayliss et al. [3] discussed a TOP for modeling the coordination of drones with aerial motion constraints. In this case, travel times were path-dependent, and a learnheuristic approach was proposed to deal with the complexity associated with the position-dependent components of the problem. Other time-dependent TOPs, also including time windows, have been discussed in Gavalas et al. [24] and Yu et al. [25]. To the best of our knowledge, however, no previous studies have considered both a position-dependent and stochastic version of the TOP, like the one discussed in this paper.

3. Problem Details

In this section we first provide a formal description of the versions of the TOP being considered here: deterministic and stochastic versions of the TOP with position-dependent rewards. Then, we provide a mathematical model for both the deterministic and stochastic variants.

3.1. Formal Description

In a TOP, a fleet M composed of |M| ≥ 2 vehicles is considered. There is a time threshold, t_max > 0, for completing each route. The set of nodes that can be visited is denoted by N = {1, 2, …, n} ∪ {o, d}, where o and d refer to the origin and destination depots, respectively. Vehicles travel along the set of arcs A which connect all the nodes. The TOP-PDR is therefore formulated on a directed graph G = (N, A). All vehicle tours begin at node o and end at node d. Each non-depot node can be visited at most once and by only one vehicle. In the TOP, a reward u_i ≥ 0 is received when node i ∈ {1, 2, …, n} is visited. In our position-dependent version, if node i is the first non-depot node in a vehicle’s tour, then a bonus B_i is added to the reward u_i obtained for visiting that node. If node i is the last non-depot node in a vehicle’s tour, then a penalty P_i ≤ u_i is subtracted from the reward u_i. In the STOP-PDR, traversing an arc (i, j) ∈ A implies a stochastic travel time, T_ij > 0. In this work, arc traversal times follow log-normal probability distributions, i.e., T_ij ~ Lognormal(μ_ij, σ_ij) for every arc (i, j) ∈ A, where μ_ij and σ_ij are the parameters of the lognormal distribution.
As illustrated in Figure 1, the final solution to the problem is a set of routes, where each route is defined by an array of nodes starting from node o (the origin depot) and arriving at node d (the end depot).
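As a minimal illustration of the travel-time model above, the following Python sketch samples one realization of a route’s duration from arc-wise lognormal distributions. All node identifiers and the μ and σ values are toy assumptions for illustration, not data from the paper:

```python
import random

def sample_route_time(route, mu, sigma, rng):
    """Sample one realization of the total travel time of a route
    (a list of nodes from o to d), drawing each arc time T_ij from
    a lognormal distribution with parameters mu_ij and sigma_ij."""
    return sum(rng.lognormvariate(mu[arc], sigma[arc])
               for arc in zip(route, route[1:]))

# Toy data (illustrative values only):
route = ["o", 1, 2, "d"]
mu = {("o", 1): 0.0, (1, 2): 0.1, (2, "d"): 0.2}
sigma = {arc: 0.25 for arc in mu}
rng = random.Random(7)
t = sample_route_time(route, mu, sigma, rng)   # one positive realization
```

Seeding the generator makes a realization reproducible, which is convenient when comparing candidate solutions on common random numbers.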

3.2. Mathematical Models

The deterministic version of the TOP-PDR assumes constant travel times T_ij = t_ij, as well as deterministic bonuses B_i = b_i and penalties P_i = p_i, the latter being given as a percentage of the original node reward u_i. The problem thus becomes a deterministic TOP with position-dependent rewards. The binary variable x_ij^m takes the value 1 if the link (i, j) is traversed by vehicle m; otherwise, it takes the value 0. In order to eliminate subtours, we consider continuous variables y_i^m, which help to order the nodes in each tour m. Considering that δ⁺(i) and δ⁻(i) are the sets of successors and predecessors of node i ∈ N, respectively, this deterministic version of the problem can be formulated as follows:
$$\max \;\; \sum_{m \in M} \sum_{\substack{(i,j) \in A \\ j \neq d}} x_{ij}^{m} \, u_{j} \;+\; \sum_{m \in M} \sum_{\substack{j \in \delta^{+}(o) \\ j \neq d}} x_{oj}^{m} \, b_{j} \;-\; \sum_{m \in M} \sum_{\substack{j \in \delta^{-}(d) \\ j \neq o}} x_{jd}^{m} \, p_{j} \quad (1)$$

subject to:

$$\sum_{j \in \delta^{+}(o)} x_{oj}^{m} = 1 \quad \forall m \in M \quad (2)$$

$$\sum_{j \in \delta^{-}(o)} x_{jo}^{m} = 0 \quad \forall m \in M \quad (3)$$

$$\sum_{j \in \delta^{-}(d)} x_{jd}^{m} = 1 \quad \forall m \in M \quad (4)$$

$$\sum_{j \in \delta^{+}(d)} x_{dj}^{m} = 0 \quad \forall m \in M \quad (5)$$

$$\sum_{i \in \delta^{-}(j)} x_{ij}^{m} = \sum_{k \in \delta^{+}(j)} x_{jk}^{m} \quad \forall m \in M, \; \forall j \in N \setminus \{o, d\} \quad (6)$$

$$\sum_{m \in M} \sum_{i \in \delta^{-}(j)} x_{ij}^{m} \leq 1 \quad \forall j \in N \setminus \{o, d\} \quad (7)$$

$$\sum_{(i,j) \in A} x_{ij}^{m} \, t_{ij} \leq t_{max} \quad \forall m \in M \quad (8)$$

$$y_{i}^{m} - y_{j}^{m} + |N| \, x_{ij}^{m} \leq |N| - 1 \quad \forall m \in M, \; \forall i, j \in N \setminus \{o, d\} \quad (9)$$
In this formulation, the objective function (1) maximizes the total collected reward. This is composed of three terms: (i) the sum of the basic reward collected at the visited nodes; (ii) the corresponding bonus collected at the first nodes in each route; and (iii) the sum of the penalties associated with the last nodes in each route, which is subtracted from the accumulated reward. Constraints (2) and (3) refer to the origin depot o, the first indicating that all vehicles m M must depart from o, and the second ensuring that no vehicle may arrive at the origin depot o. Analogously, Constraints (4) and (5) refer to the end depot d, ensuring that all vehicles must end their route at node d, and that no vehicle may depart from this node, respectively. Constraints (6) set the node balance at each non-depot node, guaranteeing that the number of incoming arcs equals the number of outgoing arcs. Constraints (7) allow at most one vehicle visit per node. Constraints (8) establish the time threshold, t m a x , on each route length. Constraints (9) are the subtour elimination constraints (based on Miller et al. [26]), in which variables y i m set a visiting order for nodes on each route m. These constraints ensure that all nodes in a route are connected by a path within the route.
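Objective (1) is straightforward to evaluate for a fixed set of routes. The following Python sketch (our own illustration, not the authors' code) sums the base rewards of the visited customers, adds the bonus of the first customer in each route, and subtracts the penalty of the last one:

```python
def route_reward(route, u, b, p):
    """Reward of one route [o, c1, ..., ck, d] under objective (1):
    base rewards u, plus bonus b for the first customer visited,
    minus penalty p for the last customer visited."""
    customers = route[1:-1]              # strip the two depots
    if not customers:
        return 0.0
    total = sum(u[i] for i in customers)
    total += b[customers[0]]             # first-position bonus
    total -= p[customers[-1]]            # last-position penalty
    return total

def solution_reward(routes, u, b, p):
    """Total collected reward over all routes of a solution."""
    return sum(route_reward(r, u, b, p) for r in routes)

# Toy instance with 5% bonuses and penalties (illustrative values):
u = {1: 10.0, 2: 20.0, 3: 30.0}
b = {i: 0.05 * u[i] for i in u}
p = {i: 0.05 * u[i] for i in u}
```

For example, the route [o, 1, 2, d] collects 10 + 20, plus the bonus 0.5 for node 1 and minus the penalty 1.0 for node 2.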
For the STOP-PDR, the objective function would be the expected value of the sum of the collected rewards, which needs to be maximized without exceeding the threshold value defined for each route length. In a more formal way, it can be written as:
$$\max \;\; \mathbb{E}\left[\, \sum_{m \in M} \sum_{\substack{(i,j) \in A \\ j \neq d}} x_{ij}^{m} \, u_{j} \;+\; \sum_{m \in M} \sum_{j \in \delta^{+}(o)} x_{oj}^{m} \, B_{j} \;-\; \sum_{m \in M} \sum_{j \in \delta^{-}(d)} x_{jd}^{m} \, P_{j} \,\right] \quad (10)$$
Note that the collected rewards u_j, as well as the bonuses and penalties, are affected by the order in which the nodes are visited, which will ultimately depend on the stochastic travel times. The stochastic TOP-PDR constraints are similar to the ones previously defined for the deterministic case, with the exception of Constraints (8), since travel times are now random variables. Therefore, for the stochastic version, Constraints (2) to (7) and (9) must also be fulfilled, whereas the Constraint set (8) is replaced by the Constraint set (11), which restricts the probability of exceeding the time threshold t_max on each route length:
$$\Pr\left( \sum_{(i,j) \in A} x_{ij}^{m} \, T_{ij} > t_{max} \right) \leq \alpha \quad \forall m \in M, \quad (11)$$
with α being as small as desired. This formulation follows the risk-sensitive approach of Varakantham et al. [27].
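Since the distribution of a sum of lognormal travel times has no closed form, the left-hand side of (11) is naturally estimated by Monte Carlo sampling, as the simheuristic later does. A minimal Python sketch, with function and parameter names of our own choosing:

```python
import random

def reliability(route, mu, sigma, t_max, runs=10_000, seed=0):
    """Monte Carlo estimate of Pr(route duration <= t_max).
    The chance constraint (11) holds when 1 - reliability(...) <= alpha."""
    rng = random.Random(seed)
    on_time = 0
    for _ in range(runs):
        duration = sum(rng.lognormvariate(mu[a], sigma[a])
                       for a in zip(route, route[1:]))
        if duration <= t_max:
            on_time += 1
    return on_time / runs

# Toy route with two arcs (illustrative parameters):
route = ["o", 1, "d"]
mu = {("o", 1): 0.0, (1, "d"): 0.0}
sigma = {a: 0.1 for a in mu}
r_ok = reliability(route, mu, sigma, t_max=10.0, runs=2000)
```

A generous threshold yields a reliability near one, while a threshold close to the expected duration yields an intermediate value.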

4. A Biased-Randomized Iterated Local Search Simheuristic

In this section, we propose a biased-randomized algorithm to solve the deterministic and position-dependent version of the TOP-PDR. This algorithm is also combined with MCS in order to transform it into a simheuristic, which is capable of solving the stochastic and position-dependent version. Our methodology for the deterministic TOP-PDR combines biased-randomized (BR) techniques with an iterated local search (ILS) metaheuristic. Hence, it will be denoted as BR-ILS. As discussed in Rabe et al. [28] and Reyes-Rubiano et al. [29], metaheuristic algorithms can be easily extended into simheuristics. In addition, these frameworks can work with a reduced number of parameters, thus avoiding time-consuming tuning processes while providing an excellent trade-off between simplicity and performance—including computational times.
Algorithm 1 outlines the main components of our final simheuristic algorithm, which is composed of three phases. In the first phase (line 1), a feasible initial solution is generated using the constructive heuristic proposed in Panadero et al. [2]. This heuristic extends the popular ‘savings’ concept for the vehicle routing problem so that it can take into account the specific characteristics of the TOP, i.e.: (i) there might be different nodes to represent the origin and destination depots; (ii) it is not mandatory to service all customers; and (iii) the collected reward, and not just the savings in time or distance, must also be considered during the construction of the routing plan.
Algorithm 1 BR-ILS Simheuristic (inputs, β, Kmax, T0, tstop)
 1: initSol ← genInitSol(inputs)                      % Deterministic heuristic
 2: baseSol ← initSol
 3: bestSol ← baseSol
 4: fastSimulation(baseSol)                           % MCS
 5: T ← T0
 6: while (time ≤ tstop) do                           % ILS stage
 7:     k ← 1
 8:     while (k ≤ Kmax) do
 9:         shakedSol ← shaking(baseSol, k, β)        % BR heuristic
10:         newSol ← localSearch1(shakedSol)
11:         newSol ← localSearch2(newSol)
12:         newSol ← localSearch3(newSol)
13:         newSol ← applyBonusAndPenalties(newSol, B, P)
14:         if (detCost(newSol) > detCost(baseSol)) then
15:             fastSimulation(newSol)                % MCS
16:             if (stochCost(newSol) > stochCost(baseSol)) then
17:                 baseSol ← newSol
18:                 k ← 1
19:                 if (stochCost(newSol) > stochCost(bestSol)) then
20:                     bestSol ← newSol
21:                     insert(poolBestSol, bestSol)
22:                 end if
23:             end if
24:         else                                      % SA-based acceptance criterion
25:             temperature ← calcTemperature(detCost(newSol), detCost(baseSol), T)
26:             if (U(0,1) ≤ temperature) then
27:                 baseSol ← newSol
28:                 k ← Kmin
29:             else
30:                 k ← k + 1
31:             end if
32:         end if
33:         T ← λT
34:     end while
35: end while
36: for (sol ∈ poolBestSol) do                        % Refinement stage - MCS
37:     deepSimulation(sol)
38:     if (stochCost(sol) > stochCost(bestSol)) then
39:         bestSol ← sol
40:     end if
41: end for
42: return bestSol
Algorithm 2 describes the constructive heuristic employed. First, an initial dummy solution (line 1) is generated. This dummy solution contains one route per customer: for each customer, a vehicle departs from the origin depot, visits the customer, and then resumes its trip towards the destination depot. If any route in this dummy solution does not satisfy the driving-range constraint, then the associated customer is discarded from the problem. Next, the ‘enriched savings’ associated with each edge connecting two different customers are computed (line 2), i.e., the benefits obtained by visiting both customers in the same route instead of using two distinct routes. In order to compute the enriched savings associated with an edge, both the travel time (or distance) required to traverse that edge and the aggregated reward generated by visiting both customers must be considered. Each edge has two associated savings, depending on the actual direction in which the edge is traversed; thus, each edge generates two different arcs. When all the savings have been computed, their associated arcs are sorted in descending order, i.e., from the highest saving to the lowest. At this point, an iterative process is initiated (line 3). In each iteration, the arc at the top of the savings list is selected (line 4). This arc connects two routes, which are merged into a new route if the merged route does not violate the driving-range constraint (line 9). This route-merging process is carried out until the savings list is empty. Next, the list of routes is sorted according to the total reward provided (line 15), selecting as many routes from this list as possible (taking into account the restricted number of vehicles in the fleet), thus generating the initial solution (initSol). Finally, the initSol is copied into baseSol and bestSol.
Algorithm 2 Heuristic approach
 1: sol ← generateDummySolution(Inputs)
 2: EnrichedSavingList ← computeSortedSavingList(Inputs, α)
 3: while (EnrichedSavingList is not empty) do         % Starts route-merging process
 4:     arc ← selectNextArc(EnrichedSavingList, β)
 5:     iRoute ← getStartingRoute(arc)
 6:     jRoute ← getClosingRoute(arc)
 7:     newRoute ← mergeRoutes(iRoute, jRoute)
 8:     timeNewRoute ← calcRouteTravelTime(newRoute)
 9:     isMergeValid ← validateMergeDrivingConsts(timeNewRoute, drivingRange)
10:     if (isMergeValid) then
11:         sol ← updateSolution(newRoute, iRoute, jRoute, sol)
12:     end if
13:     deleteEdgeFromSavingList(arc)
14: end while
15: sortRoutesByProfit(sol)
16: deleteRoutesByProfit(sol, maxVehicles)
17: return sol
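The enriched savings of line 2 can be sketched in a few lines of Python. The exact weighting used by the paper follows Panadero et al. [2]; here we assume, for illustration only, a convex combination (with weight alpha) of the classical route-merge time saving and the aggregated reward of the two end nodes:

```python
def enriched_savings(nodes, t, u, o="o", d="d", alpha=0.5):
    """Compute the 'enriched savings' of every directed arc (i, j) between
    customers: a weighted mix of the travel-time saving obtained by merging
    the two single-customer routes and the aggregated reward of both nodes.
    The formula and alpha are illustrative assumptions, not the paper's."""
    arcs = []
    for i in nodes:
        for j in nodes:
            if i == j:
                continue
            time_saving = t[(i, d)] + t[(o, j)] - t[(i, j)]
            s = alpha * time_saving + (1.0 - alpha) * (u[i] + u[j])
            arcs.append((s, i, j))
    arcs.sort(reverse=True)          # highest savings first (line 2)
    return arcs

# Toy data: two customers, asymmetric depot distances (illustrative):
t = {("o", 1): 1.0, ("o", 2): 3.0, (1, "d"): 4.0, (2, "d"): 2.0,
     (1, 2): 2.0, (2, 1): 2.0}
u = {1: 10.0, 2: 20.0}
arcs = enriched_savings([1, 2], t, u)
```

Note that each undirected edge indeed yields two entries, one per traversal direction, as described above.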
During the second phase of our approach, a BR-ILS metaheuristic is applied in order to improve this initSol by iteratively exploring the search space. Note that we integrate the BR-ILS component with MCS techniques to assess the behavior of the obtained solutions in stochastic scenarios. As shown in Algorithm 1, the process starts by applying a diversification method (shaking) on the baseSol in order to obtain a new one (shakedSol, line 9). This process is dependent on the value of k, which represents the degree of destruction to be applied in the shaking phase. During this process, k adjacent routes are selected at random from the base solution (baseSol), and their corresponding customers are unassigned. Next, in order to complete this partial solution, we apply the constructive biased-randomized version of the heuristic used in the previous step to generate the initSol. Biased-randomized techniques induce non-uniform random behavior in the heuristic by employing skewed probability distributions, thus transforming the deterministic heuristic into a probabilistic algorithm without losing the logic behind the original heuristic. Hence, we avoid obtaining the same solution in every iteration. In our case, this biased-randomization process is introduced by employing a geometric probability distribution with a parameter β ( 0 < β < 1 ). The value of this parameter was set after a quick trial-and-error process, in which we observed good performance for the β value of 0.3. Note that the constructive heuristic described here always generates a feasible solution. In effect, by initially assigning only one customer to each vehicle, the dummy solution does not violate the maximum-time-per-route constraint. Therefore, the constructive procedure only allows route-merging processes insofar as they do not transform the incumbent solution into an unfeasible one.
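The geometric biased-randomized selection described above admits a very compact implementation. In this hypothetical Python sketch, candidates are assumed to be sorted from best to worst, and smaller indices are sampled with geometrically decreasing probability:

```python
import math
import random

def biased_pick(candidates, beta, rng):
    """Select one element from a best-first sorted list using a geometric
    distribution with parameter beta (0 < beta < 1): index 0 is chosen with
    probability beta, index 1 with beta*(1-beta), and so on; a modulo wrap
    keeps the sampled index valid."""
    u = 1.0 - rng.random()                       # u in (0, 1]
    idx = int(math.log(u) / math.log(1.0 - beta))
    return candidates[idx % len(candidates)]
```

With β close to 1 the choice is almost greedy; with β close to 0 it approaches uniform randomness. The paper reports good performance with β = 0.3.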
Following the shaking procedure, the algorithm starts a local search process around the shakedSol to find a local minimum within the defined neighborhood structure (lines 10–12). In this stage, three operators are performed in a sequential way. First, a traditional intra-route 2-opt local search is executed. This operator is applied to each route until it cannot be further improved, before moving to the next route. After that, a cache memory mechanism that records the best-found-so-far routes is used to achieve a faster convergence. This mechanism is implemented using a hash map data structure, which is constantly updated whenever a better route is found by the algorithm. The second operator removes a subset of nodes from each route. The number of nodes to delete is selected randomly between 5 and 10 percent of the total number of nodes in the route. There are three different mechanisms to choose which nodes should be removed: (i) completely random; (ii) nodes with the highest rewards; and (iii) nodes with the lowest rewards. The specific mechanism used is selected randomly in each iteration of the algorithm. The last operator is a biased insertion algorithm, similar to those proposed by [13], which tries to improve the routes obtained by the previous operator. Iteratively for each route, the operator strives to insert new nodes in the route. Therefore, starting with the first node of the route, the next node is selected from the list of non-served nodes and inserted into the route (assuming that the driving-range constraints are not violated). In order to select the node to insert, the algorithm takes into account the ratio between the added duration and the additional reward, as given in Equation (12). In this equation, it is assumed that a node i is being inserted between nodes j and h in a route:
$$( t_{ji} + t_{ih} - t_{jh} ) \, / \, u_{i} \quad (12)$$
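Ratio (12) is straightforward to compute for a candidate insertion; lower values indicate cheaper insertions per unit of reward. An illustrative Python helper, with t a travel-time lookup and u the reward map (names are ours):

```python
def insertion_ratio(t, j, i, h, u):
    """Ratio (12): extra travel time incurred by inserting node i between
    consecutive nodes j and h, divided by the reward of i."""
    return (t[(j, i)] + t[(i, h)] - t[(j, h)]) / u[i]

# Toy values: inserting i adds 3 + 4 - 5 = 2 time units for a reward of 2.
t = {("j", "i"): 3.0, ("i", "h"): 4.0, ("j", "h"): 5.0}
u = {"i": 2.0}
ratio = insertion_ratio(t, "j", "i", "h", u)
```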
Once the local search process has been completed, a new solution (newSol) is returned, and the bonuses (B) and penalties (P) are applied to the corresponding nodes of each route as a percentage of the original node reward (line 13). So far, this newSol is deterministic. In order to deal with the stochastic nature of the problem, each time a newSol improves the baseSol in terms of the deterministic reward (i.e., newSol qualifies as a ‘promising’ solution in a deterministic scenario), it is sent through a fast simulation process to estimate its associated expected reward under uncertainty (line 15). Hence, a small number of MCS runs is conducted on newSol to obtain rough estimates of its behavior under stochastic conditions. Moreover, this simulation process provides feedback that can be used by the metaheuristic to better guide the search. Indeed, the selection of the base and best solutions is driven by the results of the simulation process. If the stochastic reward of newSol also improves the stochastic reward of baseSol, the latter is updated, and the k parameter is reset to 1 (lines 17–18). In the same way, if the stochastic reward of newSol improves the stochastic reward of bestSol, the latter is updated (line 20) and added to the pool of elite solutions (line 21). Through this stage, a reduced pool of ‘promising’ elite solutions is obtained. With the purpose of diversifying the search, the algorithm can accept solutions that are worse than the current one (lines 24 to 32), following an acceptance criterion based on a simulated annealing process with a decaying probability regulated by a temperature parameter (T), which is updated at each iteration. Finally, whenever the maximum computing time allowed (t_stop) has not yet been reached, the previous steps are repeated in order to generate as many new promising solutions as possible and to assess their quality via simulation.
Finally, during the last phase of our approach, a refinement procedure using a larger number of MCS runs is applied to the pool of elite solutions. This allows us to have a better assessment of the solutions, hence obtaining a more accurate estimation of the expected total reward. Since the number of generated solutions during the search can be large, and the simulation process is time-consuming, we limit the number of MCS runs to be executed. In our computational experiments, the number of runs to be executed during phases 2 (exploratory) and 3 (intensive) have been set to 1000 and 50,000, respectively.
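The two-stage simulation budget (fast runs during the search, deep runs for the elite pool) can be sketched as a single averaging routine called with different run counts. The callback interface below is our own assumption, not the paper's implementation:

```python
import random

def simulate_expected_reward(realization, runs, seed=0):
    """Average `runs` Monte Carlo realizations of a solution's stochastic
    reward. `realization(rng)` is a hypothetical callback that returns the
    reward collected in one simulated scenario."""
    rng = random.Random(seed)
    return sum(realization(rng) for _ in range(runs)) / runs

# Phase 2 uses a fast estimate; phase 3 re-evaluates the elite pool deeply.
# The reward model here is a stand-in with mean 100 (illustrative only):
fast = simulate_expected_reward(lambda rng: rng.uniform(90, 110), runs=1_000)
deep = simulate_expected_reward(lambda rng: rng.uniform(90, 110), runs=50_000)
```

The deep estimate has a far smaller standard error, which is why it is reserved for the small pool of elite solutions.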

5. Solving the Deterministic and Position-Dependent TOP-PDR

In order to validate the use of the BR-ILS algorithm in solving the deterministic TOP-PDR, we first solve the formulation presented in Section 3.2 and then compare the results with those obtained using our BR-ILS. To this end, we have used part of the widely used TOP benchmark instances presented by Chao et al. [1]. This benchmark set includes a total of 320 instances and is divided into seven different subsets. The instances will be referred to as pi.m.k, where pi represents the identifier of the subset providing its coordinates, m indicates the number of vehicles, and k stands for different values of the maximum driving range in increasing order. As explained above, the deterministic TOP-PDR assumes constant travel times, t_ij, as well as constant bonuses b_i and penalties p_i. Bonuses and penalties are given by a percentage of the original node reward, u_i. In our computational experiments, this percentage is set to 5% in both cases.
First, a B&C algorithm, implemented in the Gurobi commercial solver, was employed to solve the model introduced in Section 3.2. The algorithm was stopped after one hour of computation using an Intel i7 processor with 8 GB of RAM. In order to reduce the size of the problem, we incorporated a pre-processing phase and a valid inequality. As shown in El-Hajj et al. [30], one simple way of reducing the size of the problem is to deal only with accessible customers and arcs. We tested two main types of pre-processing by disregarding nodes and arcs, respectively, which could not be visited before the t_max time threshold. We also made use of a valid inequality on the global duration of the routes, as used in Bianchessi et al. [31]. The average results obtained for 59 out of the 60 benchmark instances p3 are shown in Table 1. Note that p3.4.a was not considered since it was infeasible when the number of routes was forced to be exactly |M| (Constraints (2) and (4)). The first column presents the variations of the formulation used, with F1 being the original formulation presented in Section 3.2. The second column indicates the value of the objective function (average reward), whereas the third column represents the computation time in seconds. Finally, the last column represents the optimality gap. More specifically, the main variations of the original formulation indicated in the first column are as follows:
  • Node pre-processing (Prep_nodes): only the nodes that can be visited within the allowed time interval are considered, i.e., nodes i ∈ N are disregarded if t_oi + t_id > t_max.
  • Arc pre-processing (Prep_arcs): only the arcs (i, j) ∈ A that can be traversed within the allowed time interval are considered, i.e., variables x_ij^m are disregarded for all m ∈ M if t_oi + t_ij + t_jd > t_max.
  • Valid inequality (VI) on the global duration of the routes, i.e.: Σ_{m∈M} Σ_{(i,j)∈A} x_ij^m · t_ij ≤ |M| · t_max.
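The two pre-processing rules above reduce to simple feasibility filters. A minimal Python sketch (node identifiers and travel times are toy assumptions):

```python
def preprocess(nodes, t, t_max, o="o", d="d"):
    """Prep_nodes and Prep_arcs: keep only nodes reachable within t_max via
    o -> i -> d, and only arcs (i, j) usable within t_max via o -> i -> j -> d."""
    kept_nodes = [i for i in nodes if t[(o, i)] + t[(i, d)] <= t_max]
    kept_arcs = [(i, j) for i in kept_nodes for j in kept_nodes
                 if i != j and t[(o, i)] + t[(i, j)] + t[(j, d)] <= t_max]
    return kept_nodes, kept_arcs

# Toy data: node 2 cannot be reached and returned within t_max = 10.
t = {("o", 1): 2.0, (1, "d"): 2.0,
     ("o", 2): 6.0, (2, "d"): 6.0,
     ("o", 3): 3.0, (3, "d"): 3.0,
     (1, 3): 1.0, (3, 1): 1.0}
kept_nodes, kept_arcs = preprocess([1, 2, 3], t, 10.0)
```

Every variable x_ij^m whose arc is filtered out can simply be fixed to zero, shrinking the model before it is handed to the solver.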
Table 1. Results obtained for 59 out of 60 p3 instances.

| Formulation | Reward | Run Time (s) | Gap (%) |
|---|---|---|---|
| F1 | 413.71 | 2111.44 | 10.69 |
| F1 + Prep_nodes | 415.97 | 1944.51 | 6.29 |
| F1 + Prep_arcs | 413.68 | 2166.42 | 10.87 |
| F1 + VI | 416.59 | 2127.94 | 10.71 |
| F1 + Prep_nodes + Prep_arcs | 416.14 | 1847.96 | 6.06 |
| F1 + Prep_nodes + VI | 415.24 | 1915.39 | 6.20 |
| F1 + Prep_nodes + Prep_arcs + VI | 417.43 | 1807.05 | 5.56 |
Observe that, with the exception of F1 + Prep_arcs, all variations outperformed the original formulation F1 in terms of the objective function. Even though F1 + Prep_arcs yielded worse results than F1, the combination of arc and node pre-processing outperformed the results obtained by F1 and F1 + Prep_nodes. As expected, the best results were obtained when both node and arc pre-processing were applied together with the valid inequality. Hereafter, we will use the term F2 to denote the formulation F1 + Prep_nodes + Prep_arcs + VI.
To test the performance of the BR-ILS algorithm, we compared the results obtained by the truncated B&C (applied to F2) and by the BR-ILS for the small TOP benchmark instances p1, p2, and p3; the results are shown in Table 2, Table 3 and Table 4, respectively. The first column indicates the instance pi.m.k. Columns 2 and 3 indicate, respectively, the objective function (profit) and the computation time (in seconds) achieved by the BR-ILS. Columns 4 to 6 report the profit, the computation time (in seconds), and the optimality gap achieved by the truncated B&C procedure. The last column presents the relative percentage deviation (RPD) between the metaheuristic and the exact procedure.
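For reference, the RPD column of Tables 2, 3 and 4 can be reproduced as follows. The sign convention (positive when the BR-ILS profit improves upon the truncated B&C profit) is inferred from the reported values rather than stated explicitly in the tables:

```python
def rpd(profit_brils, profit_bc):
    """Relative percentage deviation of the BR-ILS profit with respect to the
    truncated B&C profit; positive values mean BR-ILS improved upon B&C."""
    return 100.0 * (profit_brils - profit_bc) / profit_bc
```

For instance, for p1.2.j the BR-ILS profit of 175.50 against the B&C profit of 155.50 gives an RPD of about 12.86%, matching Table 2.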
Note that, for some of the smallest instances ( p 1 , p 2 , and p 3 ) which also had small values of t m a x , the truncated B&C procedure outperformed the BR-ILS in terms of the collected profit. However, for larger values of t m a x , the BR-ILS found the optimal solution in much shorter computation times, and even improved upon the objective function of the truncated B&C method whenever the latter did not yield optimal solutions. When considering all 139 instances, the truncated B&C procedure yielded 81 optimal solutions, whereas the BR-ILS obtained more than 75% of them. Moreover, with BR-ILS, we obtained 37 solutions that improved upon those provided by the truncated B&C procedure.
Table 5 presents the average results for the three sets of instances (first column) considering the original formulation (F1, Columns 2 to 4), the formulation with pre-processed nodes and arcs plus the valid inequality (F2, Columns 5 to 7), and the results provided by our BR-ILS algorithm (Columns 8 and 9). In order to make the methods comparable, we disregarded the three instances (p1.2.b, p1.4.d, and p1.4.e) for which the truncated B&C did not find feasible solutions. Note that, in general, F2 outperformed F1. For instance, for sets p1 and p3, F2 yielded larger total collected profits, as well as smaller gaps, in shorter computational times than F1. For the set of instances p2, both F1 and F2 yielded the optimal solutions (gap 0.00). However, F2 achieved them in approximately one sixth of the computational time required by F1. Regarding the comparison of the truncated B&C for F2 and BR-ILS, the latter yielded better profit results for the p1 and p3 instances, and similar results for the p2 instances (145.17 with BR-ILS, compared to 145.41 with the truncated B&C). This is due to the fact that the p2 instances were all solved to optimality with the truncated B&C and, therefore, the metaheuristic could only yield equal or worse results. In any case, for the set of instances p2, the difference in the objective function (profit) was less than 0.17%, and BR-ILS required less than one tenth of the computation time employed by the truncated B&C. For the p1 and p3 instances, BR-ILS achieved slightly better collected profits in much shorter computational times. The last row of Table 5 presents the average results over the three sets of instances. In summary, F2 outperformed F1 in terms of profit, computational time, and gaps, whereas BR-ILS outperformed F2, yielding slightly better profits in much shorter computational times (5.01 s versus 1236.72 s).
All in all, the average optimality gap for F2 over a total of 139 instances was small (4.47%), and the results provided by BR-ILS were quite similar, or even better for some non-optimal instances, despite employing much shorter computational times. Hence, we can conclude that BR-ILS is an efficient metaheuristic for the deterministic TOP-PDR.

6. Solving the Stochastic and Position-Dependent TOP-PDR with Our Simheuristic

Since there are no benchmark instances in the literature for the stochastic version of this problem, we extended the instances previously used for the deterministic version. To extend these instances, we assumed that the travel times followed a lognormal probability distribution. This distribution allows us to model stochastic travel times, since these are always non-negative values. In the real world, historical observations can be used to determine the specific parameters of the associated probability distribution. The lognormal distribution has two parameters: the location parameter, μ, and the scale parameter, σ, which are related to the expected value E[T_ij] = t_ij and the variance Var[T_ij]. In our experiments, the variance of the travel time associated with a pair of nodes was set as follows: Var[T_ij] = c · t_ij, where c ≥ 0 represents an experimental parameter that can be employed to analyze scenarios with different degrees of variability. In our case, we considered three different levels of uncertainty: low (c = 0.05), medium (c = 0.15), and high (c = 0.25).
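As an illustration, the lognormal parameters can be recovered from the target mean and variance via the standard identities σ² = ln(1 + Var[T_ij]/t_ij²) and μ = ln(t_ij) − σ²/2. The sketch below (the function name and interface are ours) samples a stochastic travel time under these assumptions:

```python
import math
import random

def sample_travel_time(t_ij, c, rng=random):
    """Sample T_ij ~ Lognormal(mu, sigma) such that E[T_ij] = t_ij and
    Var[T_ij] = c * t_ij, with c in {0.05, 0.15, 0.25} in the experiments."""
    var = c * t_ij
    sigma2 = math.log(1.0 + var / t_ij ** 2)  # scale parameter squared
    mu = math.log(t_ij) - 0.5 * sigma2        # location parameter
    return rng.lognormvariate(mu, math.sqrt(sigma2))
```

Averaging a large number of samples should recover the deterministic time t_ij, while individual samples remain strictly positive.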
For the scenario with a high level of uncertainty, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 display the obtained results. The first column of these tables identifies the instance, whereas the next two columns report our best deterministic solutions (OBD) and the required computational times (in seconds). The OBS-D column shows the expected rewards obtained when the OBD was evaluated under stochastic conditions with the corresponding level of uncertainty. To compute this column, we executed the algorithm with the simulation component disabled (the fast simulation process), and then applied the 'intensive' simulation process to the best deterministic solution. The next column shows the reliability associated with the OBD solution (i.e., the percentage of routes that were completed without violating the driving-range constraint in the stochastic environment). We considered that, whenever a route incurred a failure (i.e., exceeded the maximum time threshold), the reward gathered in that route was lost (i.e., it amounted to zero). Similarly, the next three columns show the expected reward obtained using the solution provided by our simheuristic approach (OBS-S), its associated reliability, and the associated computational time, respectively. Finally, the last two columns report the corresponding RPDs between OBD and the stochastic solutions, i.e., OBS-D and OBS-S, respectively.
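A minimal sketch of this evaluation logic for a single route is given below. It assumes the lognormal travel-time model described above, and all names are illustrative rather than taken from the actual implementation; in particular, a failed route (simulated duration above t_max) contributes zero reward, as in the paper:

```python
import math
import random

def simulate_route_reward(route_times, route_reward, t_max, c,
                          n_runs=1000, seed=0):
    """Monte Carlo estimate of the expected reward and reliability of one
    route. route_times: deterministic times of the arcs the route traverses;
    route_reward: total reward collected if the route finishes on time."""
    rng = random.Random(seed)
    collected, completed = 0.0, 0
    for _ in range(n_runs):
        duration = 0.0
        for t in route_times:
            # Lognormal with mean t and variance c * t, as in the experiments.
            sigma2 = math.log(1.0 + (c * t) / t ** 2)
            mu = math.log(t) - 0.5 * sigma2
            duration += rng.lognormvariate(mu, math.sqrt(sigma2))
        if duration <= t_max:
            collected += route_reward  # route completed: reward kept
            completed += 1             # route counts towards reliability
        # otherwise the whole reward of the route is lost
    return collected / n_runs, completed / n_runs
```

Summing these per-route estimates over all routes of a solution yields the expected reward (OBS-D or OBS-S) and the overall reliability reported in the tables.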
The obtained results showed that the OBS-S solutions provided by our simheuristic approach clearly outperformed the solutions for the deterministic version of the problem when these were tested in a scenario of uncertainty (OBS-D). On average, an improvement close to 13% was observed for OBS-S with respect to OBS-D. In order to assess our simheuristic in different stochastic scenarios, Figure 2 summarizes the average RPD of OBS-S with respect to OBS-D obtained using different variance levels. Note that, for all sets, the OBS-S box-plot is always closer to the OBD value than the OBS-D box-plot. In the case of set p7, which contained the largest and most challenging instances, our simheuristic approach provided an average RPD close to 7.6%, which can be contrasted with the 25.2% reported for OBS-D. This confirms that the deterministic solutions performed optimally only when all the elements of the problem were deterministic, but they could easily become sub-optimal when used in scenarios under uncertainty.
Regarding reliability values, Figure 3 illustrates a comparison between OBS-D and OBS-S. In general, it can be observed that the solutions provided by our simheuristic outperformed the ones generated by the deterministic version in all the stochastic scenarios. This was due to route failures, which occurred during the execution stage and penalized the entire route. Note that, although the level of reliability decreased as the level of variance increased, OBS-S still provided a good level of reliability in scenarios with a high level of variability. On average, OBS-S offered a reliability of 62.3%, whereas the one provided by OBS-D was only approximately 33.1%. This justifies the process of integrating simulation methods into the search process when dealing with stochastic optimization problems.

7. Conclusions

In this study, we analyzed both the deterministic and stochastic versions of the team orienteering problem with position-dependent rewards (TOP-PDR). To solve the deterministic TOP-PDR, a mixed-integer programming formulation, F1, was proposed. An enhanced formulation, F2, was then obtained by adding node and arc pre-processing, as well as a valid inequality constraint. The deterministic TOP-PDR was solved by employing these models in combination with a branch-and-cut algorithm provided by Gurobi (this procedure was truncated after one hour of computation). Furthermore, a biased-randomized iterated local search algorithm was proposed and utilized to solve the deterministic version. Out of 139 instances, which were adapted to consider position-dependent rewards, the F1 and F2 formulations solved with the truncated B&C algorithm yielded average optimality gaps of 7.84% and 4.47%, respectively. We observed that F2 outperformed F1 in terms of the collected rewards and also in terms of computational times. In addition, our BR-ILS algorithm obtained more than 75% of the optimal solutions found by F2 in just a few seconds of computation, and improved the objective function (profit) for 37 out of the 58 instances that were not optimally solved by the truncated B&C. On average, BR-ILS outperformed F2, yielding slightly better profits while employing much shorter computational times.
After validating the efficiency of BR-ILS in solving the deterministic TOP-PDR, we extended it into a simheuristic algorithm in order to deal with the stochastic and position-dependent version of the problem. MCS runs were conducted to obtain insights regarding how promising solutions perform under stochastic conditions, as well as to better guide the search process in scenarios under conditions of uncertainty. In order to diversify the search, an acceptance criterion based on simulated annealing was used and updated after each iteration. This was followed by a final refinement procedure. A series of numerical experiments were performed, assuming stochastic travel times and reward bonuses/penalties that depended upon the position of each customer in the associated route. In these experiments, different levels of uncertainty were considered (low, medium, and high variability). The results showed that the solutions provided by our simheuristic approach (OBS-S) clearly outperformed the solutions obtained for the deterministic version of the problem (OBS-D) when these were considered in stochastic scenarios. This was especially true as the level of uncertainty increased. Regarding reliability issues, as expected, the reliability of the best solutions tended to decrease as scenarios with higher levels of uncertainty were considered. Nevertheless, OBS-S outperformed OBS-D in all stochastic scenarios. On average, OBS-S provided approximately 62.3 % reliability, ahead of the 33.1 % provided by OBS-D. These results highlight the importance of integrating simulation methods during the searching process in stochastic optimization problems.
Regarding future work, several lines of research can be considered, such as (i) modeling some of the reward bonuses/penalties as fuzzy values, thus extending the problem to one with both stochastic and fuzzy uncertainty; (ii) employing similar approaches for other optimization problems, such as vehicle routing or arc routing problems; and (iii) considering an even more dynamic version of the problem (e.g., one similar to that described in Arnau et al. [32] for the vehicle routing problem), in which frequent changes in the inputs force managers to re-optimize routing plans using an agile optimization approach.

Author Contributions

Conceptualization, A.A.J. and D.C.; methodology, E.B. and J.P.; software, J.P.; validation, E.B. and D.C.; writing—original draft preparation, J.P., A.A.J., E.B. and D.C.; writing—review and editing, J.P., A.A.J., E.B. and D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially supported by the Spanish Ministry of Science (PID2019-111100RB-C21/AEI/10.13039/501100011033, PID2019-104263RB-C41, PID2019-106205GB-I00), and by the Universidad de Sevilla and the Junta de Andalucia (grant number US-1381656, financed with FEDER funds).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
BR: biased-randomized
BR-ILS: biased-randomized iterated local search
B&C: branch-and-cut
F1: original formulation presented in Section 3.2
F2: adjusted formulation presented in Section 5
ILS: iterated local search
MCS: Monte Carlo simulation
OBD: our best deterministic solution
OBS-D: our best deterministic solution evaluated under stochastic conditions
OBS-S: our best stochastic solution
OP: orienteering problem
RPD: relative percentage deviation
STOP-PDR: stochastic team orienteering problem with position-dependent rewards
TOP: team orienteering problem
TOP-PDR: team orienteering problem with position-dependent rewards
VI: valid inequality
VNS: variable neighbourhood search

References

1. Chao, I.M.; Golden, B.; Wasil, E. The team orienteering problem. Eur. J. Oper. Res. 1996, 88, 464–474.
2. Panadero, J.; Currie, C.; Juan, A.A.; Bayliss, C. Maximizing Reward from a Team of Surveillance Drones under Uncertainty Conditions: A simheuristic approach. Eur. J. Ind. Eng. 2020, 14, 1–23.
3. Bayliss, C.; Juan, A.A.; Currie, C.S.; Panadero, J. A learnheuristic approach for the team orienteering problem with aerial drone motion constraints. Appl. Soft Comput. 2020, 92, 106280.
4. Estrada-Moreno, A.; Ferrer, A.; Juan, A.A.; Panadero, J.; Bagirov, A. The non-smooth and bi-objective team orienteering problem with soft constraints. Mathematics 2020, 8, 1461.
5. Herrera, E.; Panadero, J.; Juan, A.A.; Neroni, M.; Bertolini, M. Last-Mile Delivery of Pharmaceutical Items to Heterogeneous Healthcare Centers with Random Travel Times and Unpunctuality Fees. In Proceedings of the 2021 Winter Simulation Conference (WSC), Phoenix, AZ, USA, 12–15 December 2021; pp. 1–12.
6. Gruler, A.; Quintero-Araújo, C.L.; Calvet, L.; Juan, A.A. Waste collection under uncertainty: A simheuristic based on variable neighbourhood search. Eur. J. Ind. Eng. 2017, 11, 228–255.
7. Hatami, S.; Calvet, L.; Fernandez-Viagas, V.; Framinan, J.M.; Juan, A.A. A simheuristic algorithm to set up starting times in the stochastic parallel flowshop problem. Simul. Model. Pract. Theory 2018, 86, 55–71.
8. Ferrer, A.; Guimarans, D.; Ramalhinho, H.; Juan, A.A. A BRILS metaheuristic for non-smooth flow-shop problems with failure-risk costs. Expert Syst. Appl. 2016, 44, 177–186.
9. Ferone, D.; Hatami, S.; González-Neira, E.M.; Juan, A.A.; Festa, P. A biased-randomized iterated local search for the distributed assembly permutation flow-shop problem. Int. Trans. Oper. Res. 2020, 27, 1368–1391.
10. Golden, B.; Levy, L.; Vohra, R. The orienteering problem. Nav. Res. Logist. 1987, 34, 307–318.
11. Keshtkaran, M.; Ziarati, K.; Bettinelli, A.; Vigo, D. Enhanced exact solution methods for the team orienteering problem. Int. J. Prod. Res. 2016, 54, 591–601.
12. Archetti, C.; Hertz, A.; Speranza, M.G. Metaheuristics for the team orienteering problem. J. Heuristics 2007, 13, 49–76.
13. Dang, D.C.; Guibadj, R.N.; Moukrim, A. An effective PSO-inspired algorithm for the team orienteering problem. Eur. J. Oper. Res. 2013, 229, 332–344.
14. Lin, S. Solving the team orienteering problem using effective multi-start simulated annealing. Appl. Soft Comput. 2013, 13, 1064–1073.
15. Ferreira, J.; Quintas, A.; Oliveira, J. Solving the team orienteering problem: Developing a solution tool using a genetic algorithm approach. In Soft Computing in Industrial Applications; Advances in Intelligent Systems and Computing 223; Springer: Cham, Switzerland, 2014; pp. 365–375.
16. Mei, Y.; Zhang, M. Genetic programming hyper-heuristic for stochastic team orienteering problem with time windows. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
17. Song, Y.; Ulmer, M.W.; Thomas, B.W.; Wallace, S.W. Building Trust in Home Services—Stochastic Team-Orienteering with Consistency Constraints. Transp. Sci. 2020, 54, 823–838.
18. Gunawan, A.; Lau, H.C.; Vansteenwegen, P. Orienteering problem: A survey of recent variants, solution approaches and applications. Eur. J. Oper. Res. 2016, 255, 315–332.
19. Bian, Z.; Liu, X. A real-time adjustment strategy for the operational level stochastic orienteering problem: A simulation-aided optimization approach. Transp. Res. Part E Logist. Transp. Rev. 2018, 115, 246–266.
20. Dolinskaya, I.; Shi, Z.E.; Smilowitz, K. Adaptive orienteering problem with stochastic travel times. Transp. Res. Part E Logist. Transp. Rev. 2018, 109, 1–19.
21. Thayer, T.C.; Carpin, S. An Adaptive Method for the Stochastic Orienteering Problem. IEEE Robot. Autom. Lett. 2021, 6, 4185–4192.
22. Thayer, T.C.; Carpin, S. A Resolution Adaptive Algorithm for the Stochastic Orienteering Problem with Chance Constraints. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 6411–6418.
23. Reyes-Rubiano, L.; Juan, A.; Bayliss, C.; Panadero, J.; Faulin, J.; Copado, P. A Biased-Randomized Learnheuristic for Solving the Team Orienteering Problem with Dynamic Rewards. Transp. Res. Procedia 2020, 47, 680–687.
24. Gavalas, D.; Konstantopoulos, C.; Mastakas, K.; Pantziou, G.; Vathis, N. Heuristics for the time dependent team orienteering problem: Application to tourist route planning. Comput. Oper. Res. 2015, 62, 36–50.
25. Yu, V.F.; Jewpanya, P.; Lin, S.W.; Redi, A.P. Team orienteering problem with time windows and time-dependent scores. Comput. Ind. Eng. 2019, 127, 213–224.
26. Miller, C.E.; Tucker, A.W.; Zemlin, R.A. Integer Programming Formulation of Traveling Salesman Problems. J. ACM 1960, 7, 326–329.
27. Varakantham, P.; Kumar, A.; Lau, H.C.; Yeoh, W. Risk-Sensitive Stochastic Orienteering Problems for Trip Optimization in Urban Environments. ACM Trans. Intell. Syst. Technol. 2018, 9, 1–25.
28. Rabe, M.; Deininger, M.; Juan, A.A. Speeding up computational times in simheuristics combining genetic algorithms with discrete-event simulation. Simul. Model. Pract. Theory 2020, 103, 102089.
29. Reyes-Rubiano, L.; Ferone, D.; Juan, A.A.; Faulin, J. A simheuristic for routing electric vehicles with limited driving ranges and stochastic travel times. SORT 2019, 1, 3–24.
30. El-Hajj, R.; Dang, D.C.; Moukrim, A. Solving the Team Orienteering Problem with Cutting Planes. Comput. Oper. Res. 2016, 74, 21–30.
31. Bianchessi, N.; Mansini, R.; Speranza, M.G. A branch-and-cut algorithm for the Team Orienteering Problem. Int. Trans. Oper. Res. 2018, 25, 627–635.
32. Arnau, Q.; Barrena, E.; Panadero, J.; de la Torre, R.; Juan, A.A. A biased-randomized discrete-event heuristic for coordinated multi-vehicle container transport across interconnected networks. Eur. J. Oper. Res. 2022, 302, 348–362.
Figure 1. An illustrative example of the stochastic and position-dependent TOP.
Figure 2. Comparison between OBD solutions with respect to stochastic solutions.
Figure 3. Comparison between the reliability of the stochastic solutions.
Table 2. Results given by the BR-ILS and the truncated B&C (F2) models for p1 instances.

| Instance | BR-ILS Profit | BR-ILS Time (s) | B&C Profit | B&C Time (s) | B&C Gap (%) | RPD |
|---|---|---|---|---|---|---|
| p1.2.b | 15.00 | 0.86 | NO SOL | | | |
| p1.2.c | 20.25 | 0.75 | 20.25 | 0.02 | 0.00 | 0.00 |
| p1.2.d | 30.50 | 0.74 | 30.50 | 0.05 | 0.00 | 0.00 |
| p1.2.e | 45.50 | 0.87 | 45.50 | 2.28 | 0.00 | 0.00 |
| p1.2.f | 80.50 | 0.60 | 80.50 | 7.38 | 0.00 | 0.00 |
| p1.2.g | 90.50 | 0.63 | 90.50 | 312.00 | 0.00 | 0.00 |
| p1.2.h | 110.50 | 0.49 | 110.50 | 2156.23 | 0.00 | 0.00 |
| p1.2.i | 130.75 | 0.43 | 130.50 | 3600.16 | 22.41 | 0.19 |
| p1.2.j | 175.50 | 0.42 | 155.50 | 3600.07 | 8.36 | 12.86 |
| p1.2.k | 175.50 | 0.49 | 175.25 | 3600.10 | 11.41 | 0.14 |
| p1.2.l | 195.50 | 0.31 | 190.50 | 3600.15 | 10.63 | 2.62 |
| p1.2.m | 215.50 | 0.53 | 215.50 | 3600.06 | 1.04 | 0.00 |
| p1.2.n | 235.50 | 0.34 | 235.50 | 585.32 | 0.00 | 0.00 |
| p1.2.o | 240.50 | 0.74 | 240.50 | 3600.08 | 1.66 | 0.00 |
| p1.2.p | 250.25 | 0.45 | 250.25 | 2968.52 | 0.10 | 0.00 |
| p1.2.q | 265.50 | 0.39 | 265.50 | 704.36 | 0.09 | 0.00 |
| p1.2.r | 280.25 | 0.37 | 280.25 | 795.31 | 0.09 | 0.00 |
| p1.3.c | 15.00 | 0.86 | 15.00 | 0.00 | 0.00 | 0.00 |
| p1.3.d | 15.00 | 1.00 | 15.00 | 0.00 | 0.00 | 0.00 |
| p1.3.e | 30.25 | 0.70 | 30.25 | 0.00 | 0.00 | 0.00 |
| p1.3.f | 40.50 | 0.43 | 40.50 | 0.08 | 0.00 | 0.00 |
| p1.3.g | 50.75 | 0.68 | 50.75 | 1.16 | 0.00 | 0.00 |
| p1.3.h | 71.00 | 0.54 | 71.00 | 22.02 | 0.00 | 0.00 |
| p1.3.i | 105.50 | 0.42 | 105.50 | 395.80 | 0.00 | 0.00 |
| p1.3.j | 115.50 | 0.36 | 111.00 | 3600.07 | 14.64 | 4.05 |
| p1.3.k | 135.75 | 0.51 | 135.75 | 3600.16 | 18.78 | 0.00 |
| p1.3.l | 155.75 | 0.44 | 145.75 | 3600.34 | 25.04 | 6.86 |
| p1.3.m | 175.75 | 0.46 | 170.75 | 3600.14 | 25.18 | 2.93 |
| p1.3.n | 190.75 | 0.53 | 185.00 | 3600.24 | 20.54 | 3.11 |
| p1.3.o | 205.75 | 0.30 | 205.75 | 3600.09 | 11.42 | 0.00 |
| p1.3.p | 220.75 | 0.46 | 220.50 | 3600.08 | 11.56 | 0.11 |
| p1.3.q | 230.75 | 0.30 | 225.75 | 3600.12 | 14.17 | 2.21 |
| p1.3.r | 250.75 | 0.52 | 235.50 | 3600.17 | 15.29 | 6.48 |
| p1.4.d | 15.00 | 0.86 | NO SOL | | | |
| p1.4.e | 15.00 | 1.00 | NO SOL | | | |
| p1.4.f | 25.25 | 0.74 | 25.25 | 0.01 | 0.00 | 0.00 |
| p1.4.g | 35.25 | 0.98 | 35.25 | 0.02 | 0.00 | 0.00 |
| p1.4.h | 45.50 | 0.48 | 45.50 | 0.01 | 0.00 | 0.00 |
| p1.4.i | 60.25 | 0.58 | 60.25 | 2.07 | 0.00 | 0.00 |
| p1.4.j | 75.75 | 0.39 | 75.75 | 23.76 | 0.00 | 0.00 |
| p1.4.k | 100.75 | 0.43 | 101.00 | 32.82 | 0.00 | −0.25 |
| p1.4.l | 121.00 | 0.60 | 121.00 | 1756.80 | 0.00 | 0.00 |
| p1.4.m | 131.25 | 0.40 | 130.75 | 3600.16 | 34.80 | 0.38 |
| p1.4.n | 155.75 | 0.33 | 155.75 | 3600.12 | 22.31 | 0.00 |
| p1.4.o | 166.00 | 0.42 | 161.00 | 3600.17 | 25.00 | 3.11 |
| p1.4.p | 176.00 | 0.30 | 176.00 | 3600.13 | 18.61 | 0.00 |
| p1.4.q | 191.00 | 0.52 | 190.00 | 3600.08 | 18.55 | 0.53 |
| p1.4.r | 210.75 | 0.39 | 205.50 | 3600.08 | 21.53 | 2.55 |
Table 3. Results given by the BR-ILS and the truncated B&C (F2) models for p2 instances.

| Instance | BR-ILS Profit | BR-ILS Time (s) | B&C Profit | B&C Time (s) | B&C Gap (%) | RPD |
|---|---|---|---|---|---|---|
| p2.2.a | 89.75 | 0.65 | 89.75 | 0.19 | 0.00 | 0.00 |
| p2.2.b | 121.00 | 0.61 | 121.00 | 0.09 | 0.00 | 0.00 |
| p2.2.c | 140.00 | 0.70 | 140.00 | 0.63 | 0.00 | 0.00 |
| p2.2.d | 160.25 | 0.45 | 160.25 | 2.14 | 0.00 | 0.00 |
| p2.2.e | 189.50 | 0.46 | 189.50 | 2.55 | 0.00 | 0.00 |
| p2.2.f | 200.75 | 0.73 | 201.25 | 0.25 | 0.00 | −0.25 |
| p2.2.g | 201.25 | 0.80 | 201.25 | 0.22 | 0.00 | 0.00 |
| p2.2.h | 230.75 | 0.69 | 231.25 | 0.23 | 0.00 | −0.22 |
| p2.2.i | 231.75 | 0.73 | 231.75 | 0.08 | 0.00 | 0.00 |
| p2.2.j | 259.75 | 0.50 | 259.75 | 3.92 | 0.00 | 0.00 |
| p2.2.k | 274.75 | 0.54 | 274.75 | 70.42 | 0.00 | 0.00 |
| p2.3.a | 70.25 | 0.17 | 70.25 | 0.02 | 0.00 | 0.00 |
| p2.3.b | 70.50 | 0.14 | 71.00 | 0.00 | 0.00 | −0.70 |
| p2.3.c | 105.25 | 0.48 | 105.75 | 0.08 | 0.00 | −0.47 |
| p2.3.d | 106.25 | 0.64 | 106.50 | 0.02 | 0.00 | −0.23 |
| p2.3.e | 121.00 | 0.57 | 121.50 | 0.02 | 0.00 | −0.41 |
| p2.3.f | 121.50 | 0.86 | 121.50 | 0.02 | 0.00 | 0.00 |
| p2.3.g | 145.00 | 0.26 | 145.00 | 1.70 | 0.00 | 0.00 |
| p2.3.h | 165.25 | 0.46 | 165.50 | 7.64 | 0.00 | −0.15 |
| p2.3.i | 200.75 | 0.21 | 201.00 | 4.99 | 0.00 | −0.12 |
| p2.3.j | 201.50 | 0.42 | 201.75 | 0.36 | 0.00 | −0.12 |
| p2.3.k | 201.75 | 0.90 | 201.75 | 0.09 | 0.00 | 0.00 |
| p2.4.b | 70.25 | 0.50 | 70.25 | 0.09 | 0.00 | 0.00 |
| p2.4.c | 70.50 | 0.98 | 71.00 | 0.00 | 0.00 | −0.70 |
| p2.4.d | 70.50 | 0.99 | 71.00 | 0.00 | 0.00 | −0.70 |
| p2.4.e | 70.25 | 1.00 | 71.00 | 0.02 | 0.00 | −1.06 |
| p2.4.f | 105.75 | 0.27 | 106.25 | 0.02 | 0.00 | −0.47 |
| p2.4.g | 105.75 | 0.82 | 106.50 | 0.03 | 0.00 | −0.70 |
| p2.4.h | 120.75 | 0.67 | 121.50 | 0.02 | 0.00 | −0.62 |
| p2.4.i | 121.50 | 0.99 | 121.50 | 0.02 | 0.00 | 0.00 |
| p2.4.j | 121.00 | 0.96 | 121.50 | 0.02 | 0.00 | −0.41 |
| p2.4.k | 180.75 | 0.35 | 180.75 | 3.53 | 0.00 | 0.00 |
Table 4. Results given by the BR-ILS and the truncated B&C (F2) models for p3 instances.

| Instance | BR-ILS Profit | BR-ILS Time (s) | B&C Profit | B&C Time (s) | B&C Gap (%) | RPD |
|---|---|---|---|---|---|---|
| p.3.2.a | 92.00 | 0.08 | 92.00 | 0.00 | 0.00 | 0.00 |
| p.3.2.b | 153.00 | 8.54 | 153.00 | 1.00 | 0.00 | 0.00 |
| p.3.2.c | 183.00 | 0.30 | 183.00 | 3.55 | 0.00 | 0.00 |
| p.3.2.d | 220.50 | 0.00 | 220.50 | 4.69 | 0.00 | 0.00 |
| p.3.2.e | 261.50 | 0.59 | 261.50 | 89.01 | 0.00 | 0.00 |
| p.3.2.f | 302.00 | 2.35 | 302.00 | 468.06 | 0.00 | 0.00 |
| p.3.2.g | 362.00 | 1.34 | 362.00 | 921.69 | 0.00 | 0.00 |
| p.3.2.h | 409.50 | 5.66 | 409.50 | 791.50 | 0.00 | 0.00 |
| p.3.2.i | 462.00 | 21.27 | 462.00 | 1008.81 | 0.00 | 0.00 |
| p.3.2.j | 512.00 | 5.98 | 510.00 | 3600.09 | 6.37 | 0.39 |
| p.3.2.k | 552.00 | 4.08 | 552.00 | 3600.10 | 4.62 | 0.00 |
| p.3.2.l | 590.00 | 1.14 | 590.00 | 3600.05 | 0.68 | 0.00 |
| p.3.2.m | 622.00 | 44.75 | 622.00 | 3600.06 | 1.69 | 0.00 |
| p.3.2.n | 662.00 | 45.15 | 662.00 | 3600.08 | 0.76 | 0.00 |
| p.3.2.o | 690.00 | 0.83 | 690.00 | 3600.08 | 1.45 | 0.00 |
| p.3.2.p | 720.00 | 36.71 | 712.00 | 3600.19 | 3.44 | 1.12 |
| p.3.2.q | 760.00 | 1.21 | 760.00 | 853.51 | 0.07 | 0.00 |
| p.3.2.r | 790.00 | 4.70 | 790.00 | 416.14 | 0.06 | 0.00 |
| p.3.2.s | 802.00 | 0.09 | 802.00 | 73.83 | 0.06 | 0.00 |
| p.3.2.t | 803.00 | 42.22 | 804.00 | 17.68 | 0.00 | −0.12 |
| p.3.3.a | 30.00 | 0.00 | 30.00 | 0.00 | 0.00 | 0.00 |
| p.3.3.b | 92.00 | 0.00 | 92.00 | 0.02 | 0.00 | 0.00 |
| p.3.3.c | 123.00 | 0.00 | 123.00 | 0.06 | 0.00 | 0.00 |
| p.3.3.d | 173.50 | 0.90 | 173.50 | 1.92 | 0.00 | 0.00 |
| p.3.3.e | 203.00 | 0.00 | 203.00 | 5.80 | 0.00 | 0.00 |
| p.3.3.f | 233.00 | 57.57 | 233.50 | 71.30 | 0.00 | −0.21 |
| p.3.3.g | 271.00 | 0.06 | 271.00 | 63.15 | 0.00 | 0.00 |
| p.3.3.h | 304.50 | 32.73 | 304.50 | 489.77 | 0.00 | 0.00 |
| p.3.3.i | 334.00 | 36.19 | 334.50 | 3600.06 | 11.51 | −0.15 |
| p.3.3.j | 383.00 | 6.05 | 381.50 | 3600.12 | 6.42 | 0.39 |
| p.3.3.k | 443.50 | 53.98 | 432.50 | 3600.22 | 19.08 | 2.54 |
| p.3.3.l | 482.00 | 24.76 | 473.00 | 3600.19 | 15.43 | 1.90 |
| p.3.3.m | 522.00 | 46.08 | 473.00 | 3600.08 | 33.93 | 10.36 |
| p.3.3.n | 572.00 | 54.34 | 550.00 | 3600.08 | 18.64 | 4.00 |
| p.3.3.o | 592.00 | 46.84 | 549.00 | 3600.14 | 24.68 | 7.83 |
| p.3.3.p | 643.00 | 1.52 | 634.00 | 3600.07 | 7.33 | 1.42 |
| p.3.3.q | 681.50 | 0.91 | 681.50 | 3600.15 | 4.55 | 0.00 |
| p.3.3.r | 712.50 | 7.43 | 712.50 | 2546.67 | 0.07 | 0.00 |
| p.3.3.s | 722.00 | 20.91 | 714.50 | 3600.17 | 7.91 | 1.05 |
| p.3.3.t | 762.00 | 17.72 | 742.50 | 3600.13 | 8.08 | 2.63 |
| p.3.4.b | 30.00 | 0.00 | 30.00 | 0.00 | 0.00 | 0.00 |
| p.3.4.c | 92.00 | 0.00 | 92.00 | 0.02 | 0.00 | 0.00 |
| p.3.4.d | 102.00 | 0.00 | 102.00 | 0.01 | 0.00 | 0.00 |
| p.3.4.e | 143.50 | 0.00 | 143.50 | 0.34 | 0.00 | 0.00 |
| p.3.4.f | 193.50 | 0.60 | 193.50 | 4.95 | 0.00 | 0.00 |
| p.3.4.g | 223.00 | 0.00 | 223.00 | 4.80 | 0.00 | 0.00 |
| p.3.4.h | 244.00 | 17.88 | 244.00 | 100.79 | 0.00 | 0.00 |
| p.3.4.i | 272.00 | 10.30 | 272.00 | 222.81 | 0.00 | 0.00 |
| p.3.4.j | 311.00 | 0.09 | 311.00 | 406.17 | 0.00 | 0.00 |
| p.3.4.k | 352.50 | 0.10 | 352.50 | 844.74 | 0.00 | 0.00 |
| p.3.4.l | 384.50 | 57.75 | 384.50 | 3600.12 | 13.13 | 0.00 |
| p.3.4.m | 393.00 | 33.48 | 391.50 | 3600.37 | 25.03 | 0.38 |
| p.3.4.n | 444.00 | 12.06 | 432.50 | 3600.21 | 24.51 | 2.66 |
| p.3.4.o | 500.50 | 9.77 | 485.50 | 3600.09 | 21.94 | 3.09 |
| p.3.4.p | 561.00 | 1.00 | 511.50 | 3600.08 | 23.66 | 9.68 |
| p.3.4.q | 564.50 | 3.98 | 565.50 | 3600.07 | 12.20 | −0.18 |
| p.3.4.r | 603.00 | 4.81 | 575.50 | 3600.19 | 17.20 | 4.78 |
| p.3.4.s | 669.00 | 18.16 | 595.00 | 3600.05 | 13.53 | 12.44 |
| p.3.4.t | 674.00 | 14.20 | 675.00 | 3600.04 | 0.15 | −0.15 |
Table 5. Average results obtained for p1, p2, and p3 instances.

| Instance | F1 Profit | F1 Time (s) | F1 Gap (%) | F2 Profit | F2 Time (s) | F2 Gap (%) | BR-ILS Profit | BR-ILS Time (s) |
|---|---|---|---|---|---|---|---|---|
| p1 | 132.33 | 2133.88 | 12.83 | 132.53 | 1897.08 | 7.85 | 134.37 | 0.52 |
| p2 | 145.41 | 38.33 | 0.00 | 145.41 | 6.02 | 0.00 | 145.17 | 0.61 |
| p3 | 413.71 | 2111.44 | 10.69 | 417.43 | 1807.05 | 5.56 | 423.47 | 13.89 |
| Avg: | 230.48 | 1427.89 | 7.84 | 231.79 | 1236.72 | 4.47 | 234.34 | 5.01 |
Table 6. Results for p1 instances in a stochastic scenario with a high variance (c = 0.25).

| Instance | OBD (1) | Time (s) | OBS-D (2) | Reliability (3) | OBS-S (4) | Reliability (5) | Time (s) | Gap (1-2) | Gap (1-4) |
|---|---|---|---|---|---|---|---|---|---|
| p1.2.d | 30.50 | 0.74 | 24.36 | 0.62 | 24.49 | 0.63 | 32.62 | 20.13% | 19.70% |
| p1.2.e | 45.50 | 0.87 | 34.89 | 0.68 | 38.48 | 0.74 | 6.25 | 23.32% | 15.43% |
| p1.2.f | 80.50 | 0.60 | 54.45 | 0.44 | 59.07 | 0.63 | 5.06 | 32.36% | 26.62% |
| p1.2.g | 90.50 | 0.63 | 63.37 | 0.45 | 75.17 | 0.79 | 4.12 | 29.98% | 16.94% |
| p1.2.h | 110.50 | 0.49 | 67.04 | 0.37 | 85.41 | 0.66 | 8.06 | 39.33% | 22.71% |
| p1.2.i | 130.75 | 0.43 | 80.66 | 0.37 | 102.96 | 0.64 | 15.21 | 38.31% | 21.25% |
| p1.2.j | 175.50 | 0.42 | 93.62 | 0.37 | 101.28 | 0.52 | 6.51 | 46.66% | 42.29% |
| p1.2.k | 175.50 | 0.49 | 101.85 | 0.33 | 126.94 | 0.58 | 2.08 | 41.97% | 27.67% |
| p1.2.l | 195.50 | 0.31 | 108.99 | 0.32 | 152.07 | 0.73 | 43.77 | 44.25% | 22.21% |
| p1.2.m | 215.50 | 0.53 | 134.75 | 0.39 | 168.30 | 0.71 | 6.17 | 37.47% | 21.90% |
| p1.2.n | 235.50 | 0.34 | 126.74 | 0.29 | 163.58 | 0.55 | 13.24 | 46.18% | 30.54% |
| p1.2.o | 240.50 | 0.74 | 174.58 | 0.54 | 203.06 | 0.80 | 8.77 | 27.41% | 15.57% |
| p1.2.p | 250.25 | 0.45 | 142.45 | 0.33 | 207.87 | 0.79 | 1.44 | 43.08% | 16.94% |
| p1.2.q | 265.50 | 0.39 | 152.35 | 0.33 | 216.24 | 0.74 | 10.35 | 42.62% | 18.55% |
| p1.2.r | 280.25 | 0.37 | 151.33 | 0.29 | 232.44 | 0.79 | 17.79 | 46.00% | 17.06% |
| p1.3.c | 15.00 | 0.86 | 12.35 | 0.62 | 12.38 | 0.62 | 1.64 | 17.67% | 17.47% |
| p1.3.d | 15.00 | 1.00 | 14.91 | 0.99 | 14.94 | 0.99 | 1.82 | 0.60% | 0.40% |
| p1.3.e | 30.25 | 0.70 | 23.79 | 0.53 | 24.75 | 0.52 | 2.46 | 21.36% | 18.18% |
| p1.3.f | 40.50 | 0.43 | 30.06 | 0.35 | 30.35 | 0.37 | 1.72 | 25.78% | 25.06% |
| p1.3.g | 50.75 | 0.68 | 39.51 | 0.46 | 40.31 | 0.51 | 0.72 | 22.15% | 20.57% |
| p1.3.h | 71.00 | 0.54 | 50.55 | 0.35 | 56.71 | 0.51 | 0.62 | 28.80% | 20.13% |
| p1.3.i | 105.50 | 0.42 | 74.79 | 0.32 | 84.27 | 0.60 | 34.72 | 29.11% | 20.12% |
| p1.3.j | 115.50 | 0.36 | 71.85 | 0.25 | 97.93 | 0.77 | 0.92 | 37.79% | 15.21% |
| p1.3.k | 135.75 | 0.51 | 89.41 | 0.29 | 100.18 | 0.53 | 47.47 | 34.14% | 26.20% |
| p1.3.l | 155.75 | 0.44 | 100.10 | 0.27 | 121.68 | 0.66 | 0.82 | 35.73% | 21.87% |
| p1.3.m | 175.75 | 0.46 | 116.41 | 0.29 | 140.00 | 0.61 | 45.36 | 33.76% | 20.34% |
| p1.3.n | 190.75 | 0.53 | 140.40 | 0.39 | 147.47 | 0.66 | 16.32 | 26.40% | 22.69% |
| p1.3.o | 205.75 | 0.30 | 122.13 | 0.22 | 159.45 | 0.62 | 0.59 | 40.64% | 22.50% |
| p1.3.p | 220.75 | 0.46 | 143.05 | 0.28 | 164.19 | 0.51 | 21.95 | 35.20% | 25.62% |
| p1.3.q | 230.75 | 0.30 | 168.81 | 0.35 | 200.58 | 0.77 | 35.78 | 26.84% | 13.07% |
| p1.3.r | 250.75 | 0.52 | 159.66 | 0.26 | 193.68 | 0.59 | 0.61 | 36.33% | 22.76% |
| p1.4.d | 15.00 | 0.86 | 10.48 | 0.63 | 12.33 | 0.61 | 3.83 | 30.13% | 17.80% |
| p1.4.e | 15.00 | 1.00 | 13.79 | 0.97 | 14.78 | 0.97 | 4.30 | 8.07% | 1.47% |
| p1.4.f | 25.25 | 0.74 | 18.28 | 0.43 | 19.12 | 0.43 | 6.83 | 27.60% | 24.28% |
| p1.4.g | 35.25 | 0.98 | 31.69 | 0.60 | 32.72 | 0.68 | 6.81 | 10.10% | 7.18% |
| p1.4.h | 45.50 | 0.48 | 34.80 | 0.33 | 38.43 | 0.44 | 7.03 | 23.52% | 15.54% |
| p1.4.i | 60.25 | 0.58 | 44.02 | 0.27 | 46.98 | 0.38 | 1.12 | 26.94% | 22.02% |
| p1.4.j | 75.75 | 0.39 | 55.30 | 0.25 | 56.95 | 0.39 | 8.46 | 27.00% | 24.82% |
| p1.4.k | 100.75 | 0.43 | 72.35 | 0.26 | 77.67 | 0.50 | 25.42 | 28.19% | 22.91% |
| p1.4.l | 121.00 | 0.60 | 93.24 | 0.37 | 99.26 | 0.46 | 3.29 | 22.94% | 17.97% |
| p1.4.m | 131.25 | 0.40 | 92.22 | 0.24 | 111.49 | 0.57 | 2.03 | 29.74% | 15.06% |
| p1.4.n | 155.75 | 0.33 | 111.07 | 0.23 | 126.51 | 0.60 | 38.58 | 28.69% | 18.77% |
| p1.4.o | 166.00 | 0.42 | 110.47 | 0.21 | 124.00 | 0.52 | 2.26 | 33.45% | 25.30% |
| p1.4.p | 176.00 | 0.30 | 114.49 | 0.18 | 134.82 | 0.44 | 4.23 | 34.95% | 23.40% |
| p1.4.q | 191.00 | 0.52 | 139.55 | 0.30 | 151.82 | 0.52 | 0.84 | 26.94% | 20.51% |
| Average: | 129.90 | 0.54 | 84.69 | 0.39 | 102.07 | 0.61 | 11.33 | 30.44% | 20.10% |
Table 7. Results for p2 instances in a stochastic scenario with a high variance (c = 0.25).

| Instance | OBD (1) | Det. Time (s) | OBS-D (2) | Reliability (3) | OBS-S (4) | Reliability (5) | Stoch. Time (s) | Gap (1–2) | Gap (1–4) |
|---|---|---|---|---|---|---|---|---|---|
| p2.2.a | 89.75 | 0.65 | 62.46 | 0.49 | 66.06 | 0.54 | 2.43 | 30.41% | 26.40% |
| p2.2.b | 121.00 | 0.61 | 79.33 | 0.45 | 103.58 | 0.70 | 2.35 | 34.44% | 14.40% |
| p2.2.c | 140.00 | 0.70 | 103.03 | 0.52 | 106.12 | 0.58 | 6.50 | 26.41% | 24.20% |
| p2.2.d | 160.25 | 0.45 | 96.29 | 0.37 | 115.83 | 0.52 | 3.19 | 39.91% | 27.72% |
| p2.2.e | 189.50 | 0.46 | 114.50 | 0.36 | 137.07 | 0.56 | 5.53 | 39.58% | 27.67% |
| p2.2.f | 200.75 | 0.73 | 150.61 | 0.57 | 173.22 | 0.75 | 2.90 | 24.98% | 13.71% |
| p2.2.g | 201.25 | 0.80 | 124.86 | 0.36 | 193.20 | 0.93 | 3.04 | 37.96% | 4.00% |
| p2.2.h | 230.75 | 0.69 | 156.18 | 0.46 | 186.14 | 0.62 | 20.33 | 32.32% | 19.33% |
| p2.2.i | 231.75 | 0.73 | 165.50 | 0.55 | 215.46 | 0.87 | 12.14 | 28.59% | 7.03% |
| p2.2.j | 259.75 | 0.50 | 164.26 | 0.40 | 207.77 | 0.63 | 1.06 | 36.76% | 20.01% |
| p2.2.k | 274.75 | 0.54 | 178.69 | 0.41 | 246.10 | 0.91 | 1.46 | 34.96% | 10.43% |
| p2.3.a | 70.25 | 0.17 | 50.97 | 0.34 | 52.60 | 0.39 | 40.73 | 27.44% | 25.12% |
| p2.3.b | 70.50 | 0.14 | 58.73 | 0.84 | 68.86 | 0.95 | 6.62 | 16.70% | 2.33% |
| p2.3.c | 105.25 | 0.48 | 72.57 | 0.37 | 86.32 | 0.48 | 27.43 | 31.05% | 17.99% |
| p2.3.d | 106.25 | 0.64 | 84.05 | 0.49 | 98.37 | 0.78 | 3.10 | 20.89% | 7.42% |
| p2.3.e | 121.00 | 0.57 | 96.74 | 0.54 | 104.05 | 0.69 | 6.15 | 20.05% | 14.01% |
| p2.3.f | 121.50 | 0.86 | 98.98 | 0.53 | 117.59 | 0.93 | 7.14 | 18.53% | 3.22% |
| p2.3.g | 145.00 | 0.26 | 91.02 | 0.22 | 128.53 | 0.66 | 39.04 | 37.23% | 11.36% |
| p2.3.h | 165.25 | 0.46 | 110.44 | 0.30 | 132.07 | 0.62 | 10.20 | 33.17% | 20.08% |
| p2.3.i | 200.75 | 0.21 | 117.31 | 0.20 | 147.77 | 0.33 | 0.23 | 41.56% | 26.39% |
| p2.3.j | 201.50 | 0.42 | 149.95 | 0.44 | 164.23 | 0.55 | 16.44 | 25.58% | 18.50% |
| p2.3.k | 201.75 | 0.90 | 170.24 | 0.71 | 190.68 | 0.87 | 1.07 | 15.62% | 5.49% |
| p2.4.b | 70.25 | 0.50 | 51.19 | 0.35 | 58.82 | 0.49 | 6.06 | 27.13% | 16.27% |
| p2.4.c | 70.50 | 0.98 | 56.95 | 0.59 | 63.99 | 0.70 | 4.07 | 19.22% | 9.23% |
| p2.4.d | 70.50 | 0.99 | 57.53 | 0.79 | 58.58 | 0.89 | 1.84 | 18.40% | 16.91% |
| p2.4.e | 70.25 | 1.00 | 59.19 | 0.94 | 62.01 | 0.95 | 1.92 | 15.74% | 11.73% |
| p2.4.f | 105.75 | 0.27 | 65.10 | 0.21 | 89.96 | 0.43 | 1.37 | 38.44% | 14.93% |
| p2.4.g | 105.75 | 0.82 | 87.94 | 0.50 | 98.37 | 0.69 | 48.84 | 16.84% | 6.98% |
| p2.4.h | 120.75 | 0.67 | 101.87 | 0.49 | 108.26 | 0.59 | 22.47 | 15.64% | 10.34% |
| p2.4.i | 121.50 | 0.99 | 95.55 | 0.54 | 116.14 | 0.87 | 3.04 | 21.36% | 4.41% |
| p2.4.j | 121.00 | 0.96 | 102.68 | 0.82 | 108.59 | 0.95 | 4.73 | 15.14% | 10.26% |
| p2.4.k | 180.75 | 0.35 | 118.53 | 0.19 | 128.94 | 0.37 | 8.64 | 34.42% | 28.66% |
| Average | 145.17 | 0.61 | 102.91 | 0.48 | 122.98 | 0.68 | 10.06 | 27.39% | 14.89% |
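The two GAPS columns follow directly from the objective values in each row: each gap measures the relative loss of a stochastic-scenario objective with respect to the deterministic objective OBD. A minimal sketch of the computation (the formula gap = (OBD − OBS)/OBD is inferred from the tables; the row values below are taken from instance p2.2.a in Table 7):

```python
# Recompute the GAPS columns of the results tables from the objective values.
# gap_pct is an illustrative helper, not code from the paper.

def gap_pct(ref: float, value: float) -> float:
    """Percentage loss of `value` relative to the reference objective `ref`."""
    return round(100.0 * (ref - value) / ref, 2)

# Instance p2.2.a: OBD (1) = 89.75, OBS-D (2) = 62.46, OBS-S (4) = 66.06
obd, obs_d, obs_s = 89.75, 62.46, 66.06

print(gap_pct(obd, obs_d))  # Gap (1-2) -> 30.41, matching the table
print(gap_pct(obd, obs_s))  # Gap (1-4) -> 26.4, matching the table
```

The same formula reproduces the gap columns of every row checked, which also confirms the column ordering of the reconstructed tables.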
Table 8. Results for p3 instances in a stochastic scenario with a high variance (c = 0.25).

| Instance | OBD (1) | Det. Time (s) | OBS-D (2) | Reliability (3) | OBS-S (4) | Reliability (5) | Stoch. Time (s) | Gap (1–2) | Gap (1–4) |
|---|---|---|---|---|---|---|---|---|---|
| p3.2.a | 92.00 | 0.08 | 62.65 | 0.51 | 79.52 | 0.84 | 1.03 | 31.90% | 13.57% |
| p3.2.b | 153.00 | 8.54 | 116.73 | 0.60 | 120.62 | 0.64 | 16.20 | 23.71% | 21.16% |
| p3.2.c | 183.00 | 0.30 | 149.95 | 0.66 | 168.25 | 0.88 | 4.07 | 18.06% | 8.06% |
| p3.2.d | 220.50 | 0.00 | 149.85 | 0.51 | 180.27 | 0.71 | 6.87 | 32.04% | 18.24% |
| p3.2.e | 261.50 | 0.59 | 139.39 | 0.29 | 232.64 | 0.84 | 0.95 | 46.70% | 11.04% |
| p3.2.f | 302.00 | 2.35 | 167.25 | 0.31 | 260.24 | 0.81 | 13.02 | 44.62% | 13.83% |
| p3.2.g | 362.00 | 1.34 | 197.52 | 0.31 | 290.96 | 0.74 | 1.53 | 45.44% | 19.62% |
| p3.2.h | 409.50 | 5.66 | 229.92 | 0.32 | 340.51 | 0.78 | 5.94 | 43.85% | 16.85% |
| p3.2.i | 462.00 | 21.27 | 254.98 | 0.32 | 375.65 | 0.78 | 28.23 | 44.81% | 18.69% |
| p3.2.j | 512.00 | 5.98 | 287.59 | 0.32 | 397.44 | 0.73 | 22.71 | 43.83% | 22.38% |
| p3.2.k | 552.00 | 4.08 | 345.30 | 0.40 | 420.76 | 0.60 | 12.10 | 37.45% | 23.78% |
| p3.2.l | 590.00 | 1.14 | 339.56 | 0.35 | 495.65 | 0.83 | 11.16 | 42.45% | 15.99% |
| p3.2.m | 622.00 | 44.75 | 328.00 | 0.28 | 547.35 | 0.86 | 45.02 | 47.27% | 12.00% |
| p3.2.n | 662.00 | 45.15 | 322.40 | 0.22 | 593.91 | 0.85 | 49.24 | 51.30% | 10.29% |
| p3.2.o | 690.00 | 0.83 | 408.48 | 0.37 | 618.66 | 0.90 | 4.77 | 40.80% | 10.34% |
| p3.2.p | 720.00 | 36.71 | 497.16 | 0.44 | 637.88 | 0.89 | 37.14 | 30.95% | 11.41% |
| p3.2.q | 760.00 | 1.21 | 460.92 | 0.36 | 669.49 | 0.74 | 53.19 | 39.35% | 11.91% |
| p3.2.r | 790.00 | 4.70 | 499.22 | 0.38 | 672.19 | 0.91 | 57.01 | 36.81% | 14.91% |
| p3.2.s | 802.00 | 0.09 | 606.35 | 0.58 | 680.50 | 0.73 | 1.55 | 24.40% | 15.15% |
| p3.2.t | 803.00 | 42.22 | 531.25 | 0.42 | 715.70 | 0.83 | 45.24 | 33.84% | 10.87% |
| p3.3.a | 30.00 | 0.00 | 20.37 | 0.68 | 25.05 | 0.75 | 0.94 | 32.10% | 16.50% |
| p3.3.b | 92.00 | 0.00 | 83.70 | 0.78 | 85.35 | 0.78 | 0.41 | 9.02% | 7.23% |
| p3.3.c | 123.00 | 0.00 | 99.34 | 0.53 | 102.86 | 0.57 | 0.70 | 19.24% | 16.37% |
| p3.3.d | 173.50 | 0.90 | 136.08 | 0.52 | 138.40 | 0.52 | 28.77 | 21.57% | 20.23% |
| p3.3.e | 203.00 | 0.00 | 156.37 | 0.52 | 172.18 | 0.76 | 0.23 | 22.97% | 15.18% |
| p3.3.f | 233.00 | 57.57 | 153.31 | 0.28 | 200.09 | 0.58 | 66.78 | 34.20% | 14.12% |
| p3.3.g | 271.00 | 0.06 | 184.21 | 0.36 | 215.55 | 0.66 | 0.67 | 32.03% | 20.46% |
| p3.3.h | 304.50 | 32.73 | 234.68 | 0.36 | 259.55 | 0.54 | 45.69 | 22.93% | 14.76% |
| p3.3.i | 334.00 | 36.19 | 265.44 | 0.51 | 301.88 | 0.81 | 61.84 | 20.53% | 9.62% |
| p3.3.j | 383.00 | 6.05 | 267.48 | 0.29 | 337.70 | 0.78 | 13.58 | 30.16% | 11.83% |
| p3.3.k | 443.50 | 53.98 | 242.12 | 0.18 | 347.33 | 0.47 | 62.83 | 45.41% | 21.68% |
| p3.3.l | 482.00 | 24.76 | 331.66 | 0.34 | 405.86 | 0.59 | 28.83 | 31.19% | 15.80% |
| p3.3.m | 522.00 | 46.08 | 325.82 | 0.24 | 451.62 | 0.81 | 74.87 | 37.58% | 13.48% |
| p3.3.n | 572.00 | 54.34 | 337.69 | 0.20 | 483.77 | 0.69 | 57.45 | 40.96% | 15.42% |
| p3.3.o | 592.00 | 46.84 | 404.23 | 0.32 | 541.19 | 0.85 | 56.86 | 31.72% | 8.58% |
| p3.3.p | 643.00 | 1.52 | 416.06 | 0.24 | 540.23 | 0.72 | 35.06 | 35.29% | 15.98% |
| p3.3.q | 681.50 | 0.91 | 396.12 | 0.22 | 578.71 | 0.74 | 1.89 | 41.88% | 15.08% |
| p3.3.r | 712.50 | 7.43 | 403.43 | 0.19 | 609.89 | 0.78 | 53.62 | 43.38% | 14.40% |
| p3.3.s | 722.00 | 20.91 | 510.30 | 0.34 | 658.67 | 0.75 | 49.36 | 29.32% | 8.77% |
| p3.3.t | 762.00 | 17.72 | 459.42 | 0.23 | 687.65 | 0.91 | 65.29 | 39.71% | 9.76% |
| p3.4.b | 30.00 | 0.00 | 19.74 | 0.66 | 25.38 | 0.77 | 8.45 | 34.20% | 15.40% |
| p3.4.c | 92.00 | 0.00 | 75.35 | 0.50 | 76.04 | 0.52 | 2.59 | 18.10% | 17.35% |
| p3.4.d | 102.00 | 0.00 | 95.60 | 0.64 | 96.37 | 0.64 | 43.98 | 6.27% | 5.52% |
| p3.4.e | 143.50 | 0.00 | 114.40 | 0.37 | 122.45 | 0.45 | 1.72 | 20.28% | 14.67% |
| p3.4.f | 193.50 | 0.60 | 149.31 | 0.40 | 154.74 | 0.47 | 0.96 | 22.84% | 20.03% |
| p3.4.g | 223.00 | 0.00 | 140.68 | 0.20 | 178.05 | 0.58 | 0.87 | 36.91% | 20.16% |
| p3.4.h | 244.00 | 17.88 | 190.16 | 0.33 | 211.63 | 0.43 | 38.74 | 22.07% | 13.27% |
| p3.4.i | 272.00 | 10.30 | 159.09 | 0.12 | 235.34 | 0.81 | 24.98 | 41.51% | 13.48% |
| p3.4.j | 311.00 | 0.09 | 215.56 | 0.23 | 254.21 | 0.46 | 10.37 | 30.69% | 18.26% |
| p3.4.k | 352.50 | 0.10 | 246.27 | 0.20 | 280.15 | 0.30 | 3.11 | 30.14% | 20.52% |
| p3.4.l | 384.50 | 57.75 | 256.68 | 0.20 | 319.51 | 0.50 | 94.12 | 33.24% | 16.90% |
| p3.4.m | 393.00 | 33.48 | 252.58 | 0.17 | 368.56 | 0.82 | 43.58 | 35.73% | 6.22% |
| p3.4.n | 444.00 | 12.06 | 288.79 | 0.15 | 391.50 | 0.82 | 39.71 | 34.96% | 11.82% |
| p3.4.o | 500.50 | 9.77 | 334.41 | 0.21 | 427.46 | 0.63 | 15.40 | 33.18% | 14.59% |
| p3.4.p | 561.00 | 1.00 | 363.06 | 0.18 | 462.52 | 0.58 | 7.98 | 35.28% | 17.55% |
| p3.4.q | 564.50 | 3.98 | 404.65 | 0.29 | 490.50 | 0.67 | 47.84 | 28.32% | 13.11% |
| p3.4.r | 603.00 | 4.81 | 350.29 | 0.12 | 538.68 | 0.85 | 20.95 | 41.91% | 10.67% |
| p3.4.s | 669.00 | 18.16 | 395.71 | 0.11 | 505.18 | 0.40 | 27.14 | 40.85% | 24.49% |
| p3.4.t | 674.00 | 14.20 | 402.78 | 0.14 | 577.17 | 0.53 | 21.64 | 40.24% | 14.37% |
| Average | 423.47 | 13.89 | 270.80 | 0.35 | 361.99 | 0.70 | 26.72 | 33.35% | 14.81% |
Table 9. Results for p4 instances in a stochastic scenario with a high variance (c = 0.25).

| Instance | OBD (1) | Det. Time (s) | OBS-D (2) | Reliability (3) | OBS-S (4) | Reliability (5) | Stoch. Time (s) | Gap (1–2) | Gap (1–4) |
|---|---|---|---|---|---|---|---|---|---|
| p4.2.a | 206.35 | 2.51 | 121.95 | 0.35 | 142.67 | 0.64 | 40.17 | 40.90% | 30.86% |
| p4.2.b | 342.35 | 2.04 | 184.96 | 0.29 | 258.38 | 0.71 | 54.64 | 45.97% | 24.53% |
| p4.2.c | 449.35 | 10.85 | 238.92 | 0.28 | 332.49 | 0.58 | 210.69 | 46.83% | 26.01% |
| p4.2.d | 532.25 | 386.22 | 310.45 | 0.34 | 415.16 | 0.73 | 541.09 | 41.67% | 22.00% |
| p4.2.e | 618.35 | 45.77 | 354.61 | 0.33 | 440.99 | 0.57 | 63.69 | 42.65% | 28.68% |
| p4.2.f | 687.40 | 547.96 | 364.28 | 0.28 | 504.55 | 0.68 | 559.80 | 47.01% | 26.60% |
| p4.2.g | 747.05 | 518.38 | 413.34 | 0.30 | 583.38 | 0.71 | 530.50 | 44.67% | 21.91% |
| p4.2.h | 824.40 | 472.07 | 430.67 | 0.24 | 529.19 | 0.50 | 452.46 | 47.76% | 35.81% |
| p4.2.i | 915.60 | 46.22 | 484.16 | 0.29 | 587.39 | 0.47 | 126.64 | 47.12% | 35.85% |
| p4.2.j | 944.25 | 479.72 | 539.38 | 0.30 | 617.54 | 0.54 | 530.68 | 42.88% | 34.60% |
| p4.2.k | 1014.40 | 354.50 | 624.14 | 0.36 | 659.04 | 0.49 | 410.05 | 38.47% | 35.03% |
| p4.2.l | 1041.75 | 369.46 | 493.79 | 0.18 | 785.39 | 0.63 | 458.74 | 52.60% | 24.61% |
| p4.2.m | 1109.45 | 200.05 | 567.28 | 0.22 | 716.65 | 0.43 | 251.95 | 48.87% | 35.40% |
| p4.2.n | 1145.40 | 336.26 | 602.19 | 0.24 | 804.57 | 0.49 | 441.51 | 47.43% | 29.76% |
| p4.2.o | 1151.25 | 326.57 | 633.94 | 0.30 | 845.51 | 0.57 | 375.19 | 44.93% | 26.56% |
| p4.2.p | 1204.55 | 42.16 | 684.10 | 0.32 | 786.96 | 0.46 | 410.83 | 43.21% | 34.67% |
| p4.2.q | 1220.90 | 83.79 | 710.25 | 0.34 | 856.04 | 0.49 | 215.42 | 41.83% | 29.88% |
| p4.2.r | 1246.45 | 165.92 | 757.83 | 0.37 | 861.53 | 0.56 | 228.35 | 39.20% | 30.88% |
| p4.2.s | 1262.50 | 124.60 | 839.05 | 0.42 | 843.19 | 0.48 | 127.65 | 33.54% | 33.21% |
| p4.2.t | 1285.35 | 172.90 | 690.05 | 0.29 | 911.49 | 0.55 | 232.53 | 46.31% | 29.09% |
| p4.3.b | 38.50 | 0.00 | 20.81 | 0.32 | 22.64 | 0.46 | 0.00 | 45.95% | 41.19% |
| p4.3.c | 194.40 | 0.09 | 114.50 | 0.21 | 149.37 | 0.52 | 4.28 | 41.10% | 23.16% |
| p4.3.d | 334.90 | 449.67 | 176.78 | 0.16 | 250.91 | 0.54 | 524.70 | 47.21% | 25.08% |
| p4.3.e | 469.75 | 373.92 | 285.38 | 0.22 | 352.14 | 0.49 | 467.74 | 39.25% | 25.04% |
| p4.3.f | 580.25 | 577.16 | 308.91 | 0.15 | 420.43 | 0.58 | 591.80 | 46.76% | 27.54% |
| p4.3.g | 647.35 | 165.25 | 359.31 | 0.17 | 397.26 | 0.31 | 200.86 | 44.50% | 38.63% |
| p4.3.h | 722.90 | 357.78 | 395.16 | 0.16 | 531.47 | 0.41 | 499.46 | 45.34% | 26.48% |
| p4.3.i | 806.70 | 595.52 | 446.82 | 0.17 | 576.54 | 0.46 | 553.65 | 44.61% | 28.53% |
| p4.3.j | 853.00 | 42.25 | 507.04 | 0.21 | 625.20 | 0.47 | 427.48 | 40.56% | 26.71% |
| p4.3.k | 915.90 | 232.85 | 479.71 | 0.12 | 652.15 | 0.47 | 310.86 | 47.62% | 28.80% |
| p4.3.l | 964.70 | 115.28 | 530.03 | 0.17 | 675.42 | 0.45 | 104.94 | 45.06% | 29.99% |
| p4.3.m | 1042.50 | 440.55 | 548.24 | 0.14 | 726.83 | 0.42 | 593.18 | 47.41% | 30.28% |
| p4.3.n | 1114.10 | 470.48 | 616.46 | 0.16 | 785.67 | 0.52 | 536.31 | 44.67% | 29.48% |
| p4.3.o | 1153.25 | 300.51 | 614.79 | 0.14 | 740.52 | 0.38 | 301.95 | 46.69% | 35.79% |
| p4.3.p | 1198.60 | 519.83 | 725.22 | 0.21 | 802.74 | 0.32 | 553.44 | 39.49% | 33.03% |
| p4.3.q | 1225.85 | 99.60 | 824.07 | 0.30 | 908.05 | 0.45 | 373.21 | 32.78% | 25.92% |
| p4.3.r | 1249.35 | 246.66 | 811.36 | 0.28 | 896.92 | 0.44 | 341.96 | 35.06% | 28.21% |
| p4.3.s | 1267.25 | 258.12 | 739.01 | 0.20 | 920.24 | 0.45 | 421.37 | 41.68% | 27.38% |
| p4.3.t | 1287.60 | 160.71 | 807.24 | 0.25 | 909.29 | 0.44 | 263.50 | 37.31% | 29.38% |
| p4.4.d | 38.50 | 0.00 | 20.32 | 0.30 | 20.35 | 0.30 | 1.40 | 47.22% | 47.14% |
| p4.4.e | 185.50 | 0.06 | 115.68 | 0.13 | 137.98 | 0.40 | 4.23 | 37.64% | 25.62% |
| p4.4.f | 325.55 | 96.42 | 188.71 | 0.11 | 238.18 | 0.42 | 132.45 | 42.03% | 26.84% |
| p4.4.g | 462.80 | 123.79 | 256.83 | 0.10 | 311.23 | 0.24 | 204.01 | 44.51% | 32.75% |
| p4.4.h | 572.55 | 500.74 | 323.68 | 0.11 | 396.85 | 0.33 | 555.25 | 43.47% | 30.69% |
| p4.4.i | 658.10 | 399.88 | 364.43 | 0.10 | 452.22 | 0.36 | 426.58 | 44.62% | 31.28% |
| p4.4.j | 731.90 | 267.88 | 443.19 | 0.14 | 511.10 | 0.40 | 351.15 | 39.45% | 30.17% |
| p4.4.k | 822.30 | 50.95 | 435.79 | 0.04 | 557.94 | 0.33 | 255.38 | 47.00% | 32.15% |
| p4.4.l | 878.50 | 454.71 | 458.45 | 0.10 | 617.26 | 0.37 | 520.78 | 47.81% | 29.74% |
| p4.4.m | 906.65 | 146.87 | 541.91 | 0.12 | 658.96 | 0.35 | 181.86 | 40.23% | 27.32% |
| p4.4.n | 965.40 | 461.19 | 551.10 | 0.11 | 699.71 | 0.35 | 465.39 | 42.91% | 27.52% |
| p4.4.o | 1056.30 | 497.32 | 578.27 | 0.08 | 742.52 | 0.37 | 502.85 | 45.26% | 29.71% |
| p4.4.p | 1115.00 | 109.33 | 591.18 | 0.08 | 727.26 | 0.20 | 120.49 | 46.98% | 34.77% |
| p4.4.q | 1147.00 | 385.90 | 623.08 | 0.09 | 836.70 | 0.41 | 449.11 | 45.68% | 27.05% |
| p4.4.r | 1192.95 | 128.49 | 694.08 | 0.12 | 804.69 | 0.26 | 260.46 | 41.82% | 32.55% |
| p4.4.s | 1236.35 | 399.35 | 663.76 | 0.08 | 894.79 | 0.34 | 433.25 | 46.31% | 27.63% |
| p4.4.t | 1276.10 | 288.25 | 714.58 | 0.10 | 923.10 | 0.38 | 318.90 | 44.00% | 27.66% |
| Average | 849.78 | 257.24 | 480.74 | 0.21 | 595.66 | 0.46 | 330.66 | 43.64% | 29.95% |
Table 10. Results for p5 instances in a stochastic scenario with a high variance (c = 0.25).

| Instance | OBD (1) | Det. Time (s) | OBS-D (2) | Reliability (3) | OBS-S (4) | Reliability (5) | Stoch. Time (s) | Gap (1–2) | Gap (1–4) |
|---|---|---|---|---|---|---|---|---|---|
| p5.2.b | 20.00 | 0.00 | 16.58 | 0.69 | 16.80 | 0.71 | 0.81 | 17.10% | 16.00% |
| p5.2.c | 50.00 | 0.00 | 34.20 | 0.47 | 35.38 | 0.63 | 15.75 | 31.60% | 29.24% |
| p5.2.d | 80.50 | 0.00 | 61.40 | 0.56 | 75.00 | 0.88 | 2.70 | 23.73% | 6.83% |
| p5.2.e | 180.00 | 0.00 | 107.91 | 0.36 | 109.71 | 0.37 | 23.40 | 40.05% | 39.05% |
| p5.2.f | 241.00 | 340.89 | 116.12 | 0.18 | 152.43 | 0.53 | 390.38 | 51.82% | 36.75% |
| p5.2.g | 320.00 | 25.91 | 261.60 | 0.67 | 168.72 | 0.69 | 117.05 | 18.25% | 47.28% |
| p5.2.h | 411.00 | 11.66 | 236.16 | 0.33 | 293.21 | 0.61 | 16.94 | 42.54% | 28.66% |
| p5.2.i | 481.00 | 254.94 | 241.28 | 0.27 | 403.04 | 0.84 | 298.02 | 49.84% | 16.21% |
| p5.2.j | 580.00 | 0.00 | 319.00 | 0.30 | 372.90 | 0.45 | 119.07 | 45.00% | 35.71% |
| p5.2.k | 670.00 | 182.65 | 396.31 | 0.35 | 505.64 | 0.62 | 183.39 | 40.85% | 24.53% |
| p5.2.l | 800.00 | 417.43 | 432.00 | 0.29 | 639.60 | 0.94 | 493.51 | 46.00% | 20.05% |
| p5.2.m | 860.00 | 143.33 | 468.27 | 0.30 | 580.40 | 0.58 | 179.27 | 45.55% | 32.51% |
| p5.2.n | 925.50 | 116.54 | 535.46 | 0.33 | 702.71 | 0.65 | 146.28 | 42.14% | 24.07% |
| p5.2.o | 1020.50 | 159.42 | 506.22 | 0.16 | 739.43 | 0.67 | 339.49 | 50.39% | 27.54% |
| p5.2.p | 1151.00 | 125.34 | 632.50 | 0.30 | 703.57 | 0.47 | 162.80 | 45.05% | 38.87% |
| p5.2.q | 1195.50 | 125.72 | 625.83 | 0.27 | 1048.23 | 0.85 | 221.68 | 47.65% | 12.32% |
| p5.2.r | 1260.00 | 137.00 | 814.56 | 0.42 | 1133.73 | 0.99 | 189.56 | 35.35% | 10.02% |
| p5.2.s | 1340.00 | 340.25 | 714.94 | 0.29 | 1140.00 | 0.82 | 451.46 | 46.65% | 14.93% |
| p5.2.t | 1390.00 | 163.62 | 738.77 | 0.28 | 1140.00 | 0.82 | 302.46 | 46.85% | 17.99% |
| p5.2.u | 1460.00 | 81.36 | 816.16 | 0.31 | 1134.60 | 0.79 | 148.29 | 44.10% | 22.29% |
| p5.2.v | 1501.00 | 309.87 | 742.50 | 0.21 | 1210.00 | 0.72 | 485.35 | 50.53% | 19.39% |
| p5.2.w | 1555.50 | 6.41 | 966.82 | 0.38 | 1200.00 | 0.58 | 84.63 | 37.85% | 22.85% |
| p5.2.x | 1611.00 | 154.07 | 871.01 | 0.29 | 1163.06 | 0.59 | 186.93 | 45.93% | 27.81% |
| p5.2.y | 1645.50 | 46.56 | 894.80 | 0.30 | 1236.35 | 0.63 | 88.72 | 45.62% | 24.86% |
| p5.2.z | 1665.50 | 55.66 | 879.34 | 0.28 | 1245.52 | 0.58 | 109.01 | 47.20% | 25.22% |
| p5.3.b | 15.00 | 0.00 | 10.98 | 0.39 | 11.05 | 0.40 | 0.82 | 26.80% | 26.33% |
| p5.3.c | 20.00 | 0.00 | 16.58 | 0.69 | 16.63 | 0.69 | 0.92 | 17.10% | 16.85% |
| p5.3.d | 61.50 | 0.00 | 37.32 | 0.24 | 39.24 | 0.28 | 1.28 | 39.32% | 36.20% |
| p5.3.e | 96.00 | 0.00 | 60.91 | 0.30 | 61.79 | 0.31 | 10.86 | 36.55% | 35.64% |
| p5.3.f | 111.50 | 177.83 | 74.94 | 0.31 | 104.24 | 0.85 | 200.46 | 32.79% | 6.51% |
| p5.3.g | 186.00 | 0.26 | 127.43 | 0.33 | 132.88 | 0.37 | 12.21 | 31.49% | 28.56% |
| p5.3.h | 261.00 | 43.64 | 204.78 | 0.49 | 209.12 | 0.52 | 182.50 | 21.54% | 19.88% |
| p5.3.i | 336.50 | 358.96 | 165.27 | 0.14 | 223.32 | 0.45 | 415.52 | 50.89% | 33.63% |
| p5.3.j | 471.00 | 86.41 | 294.91 | 0.25 | 306.58 | 0.27 | 181.56 | 37.39% | 34.91% |
| p5.3.k | 495.00 | 23.70 | 282.81 | 0.19 | 418.65 | 0.71 | 173.20 | 42.87% | 15.42% |
| p5.3.l | 595.00 | 100.64 | 361.14 | 0.22 | 423.61 | 0.41 | 178.27 | 39.30% | 28.81% |
| p5.3.m | 651.00 | 57.13 | 529.92 | 0.54 | 534.94 | 0.56 | 112.05 | 18.60% | 17.83% |
| p5.3.n | 755.00 | 364.78 | 430.76 | 0.19 | 524.23 | 0.40 | 537.29 | 42.95% | 30.57% |
| p5.3.o | 870.00 | 8.35 | 476.18 | 0.16 | 545.96 | 0.34 | 192.31 | 45.27% | 37.25% |
| p5.3.p | 991.50 | 172.58 | 527.34 | 0.15 | 673.07 | 0.42 | 534.65 | 46.81% | 32.12% |
| p5.3.q | 1070.00 | 122.58 | 586.00 | 0.16 | 872.30 | 0.72 | 182.30 | 45.23% | 18.48% |
| p5.3.r | 1125.00 | 73.51 | 619.96 | 0.17 | 962.32 | 0.78 | 189.35 | 44.89% | 14.46% |
| p5.3.s | 1190.50 | 366.21 | 652.42 | 0.16 | 973.05 | 0.73 | 386.34 | 45.20% | 18.27% |
| p5.3.t | 1261.00 | 398.29 | 884.14 | 0.34 | 975.00 | 0.41 | 452.43 | 29.89% | 22.68% |
| p5.3.u | 1331.50 | 23.41 | 766.25 | 0.19 | 966.69 | 0.44 | 353.52 | 42.45% | 27.40% |
| p5.3.v | 1425.50 | 276.10 | 824.51 | 0.19 | 1008.72 | 0.44 | 342.69 | 42.16% | 29.24% |
| p5.3.w | 1460.50 | 259.43 | 929.72 | 0.25 | 1123.08 | 0.51 | 289.34 | 36.34% | 23.10% |
| p5.3.x | 1536.00 | 14.74 | 847.83 | 0.17 | 1127.74 | 0.51 | 324.99 | 44.80% | 26.58% |
| p5.3.y | 1591.00 | 118.64 | 1179.48 | 0.41 | 1273.31 | 0.58 | 136.56 | 25.87% | 19.97% |
| p5.3.z | 1635.00 | 73.93 | 871.74 | 0.15 | 1357.56 | 0.81 | 129.34 | 46.68% | 16.97% |
| p5.4.d | 20.00 | 0.00 | 16.39 | 0.67 | 16.65 | 0.69 | 0.95 | 18.05% | 16.75% |
| p5.4.e | 20.00 | 0.00 | 19.38 | 0.94 | 19.47 | 0.95 | 0.99 | 3.10% | 2.65% |
| p5.4.f | 82.00 | 0.00 | 70.10 | 0.59 | 70.38 | 0.60 | 51.84 | 14.51% | 14.17% |
| p5.4.g | 141.00 | 0.00 | 97.85 | 0.24 | 99.54 | 0.25 | 461.22 | 30.60% | 29.40% |
| p5.4.h | 142.00 | 0.51 | 118.72 | 0.50 | 132.87 | 0.82 | 2.48 | 16.39% | 6.43% |
| p5.4.i | 240.00 | 0.50 | 132.46 | 0.09 | 141.27 | 0.26 | 87.92 | 44.81% | 41.14% |
| p5.4.j | 341.00 | 1.54 | 200.30 | 0.12 | 205.13 | 0.19 | 54.05 | 41.26% | 39.84% |
| p5.4.k | 342.00 | 85.54 | 266.22 | 0.36 | 301.99 | 0.62 | 150.97 | 22.16% | 11.70% |
| p5.4.l | 430.00 | 55.51 | 219.86 | 0.04 | 302.47 | 0.34 | 131.72 | 48.87% | 29.66% |
| p5.4.m | 556.00 | 9.52 | 301.85 | 0.09 | 366.80 | 0.26 | 202.96 | 45.71% | 34.03% |
| p5.4.n | 622.00 | 352.78 | 387.66 | 0.14 | 505.50 | 0.44 | 481.86 | 37.68% | 18.73% |
| p5.4.o | 690.50 | 507.22 | 406.65 | 0.12 | 524.24 | 0.44 | 538.58 | 41.11% | 24.08% |
| p5.4.p | 761.00 | 24.77 | 448.52 | 0.07 | 559.44 | 0.29 | 312.12 | 41.06% | 26.49% |
| p5.4.q | 861.00 | 61.37 | 630.61 | 0.29 | 638.31 | 0.34 | 16.51 | 26.76% | 25.86% |
| p5.4.r | 961.00 | 497.20 | 454.00 | 0.08 | 711.59 | 0.48 | 575.32 | 52.76% | 25.95% |
| p5.4.s | 1030.00 | 298.06 | 538.27 | 0.07 | 686.34 | 0.28 | 240.31 | 47.74% | 33.37% |
| p5.4.t | 1160.00 | 310.31 | 633.36 | 0.09 | 733.82 | 0.26 | 153.09 | 45.40% | 36.74% |
| p5.4.u | 1300.00 | 0.00 | 730.60 | 0.10 | 827.72 | 0.25 | 94.71 | 43.80% | 36.33% |
| p5.4.v | 1321.00 | 387.32 | 848.21 | 0.16 | 1037.40 | 0.41 | 390.56 | 35.79% | 21.47% |
| p5.4.w | 1390.00 | 25.42 | 802.99 | 0.11 | 1220.38 | 0.78 | 31.53 | 42.23% | 12.20% |
| p5.4.x | 1451.00 | 425.36 | 804.15 | 0.09 | 1278.55 | 0.94 | 464.56 | 44.58% | 11.88% |
| p5.4.y | 1520.00 | 374.33 | 932.68 | 0.14 | 1298.38 | 1.00 | 410.34 | 38.64% | 14.58% |
| p5.4.z | 1621.00 | 279.07 | 887.33 | 0.09 | 1300.00 | 1.00 | 329.55 | 45.26% | 19.80% |
| Average | 807.66 | 137.21 | 467.69 | 0.28 | 616.35 | 0.57 | 211.81 | 38.34% | 24.27% |
Table 11. Results for p6 instances in a stochastic scenario with a high variance (c = 0.25).

| Instance | OBD (1) | Det. Time (s) | OBS-D (2) | Reliability (3) | OBS-S (4) | Reliability (5) | Stoch. Time (s) | Gap (1–2) | Gap (1–4) |
|---|---|---|---|---|---|---|---|---|---|
| p6.2.e | 360.00 | 43.62 | 194.94 | 0.29 | 201.41 | 0.32 | 101.23 | 45.85% | 44.05% |
| p6.2.f | 588.00 | 0.00 | 335.75 | 0.33 | 335.90 | 0.40 | 20.45 | 42.90% | 42.87% |
| p6.2.g | 660.00 | 74.66 | 434.61 | 0.43 | 438.59 | 0.47 | 374.35 | 34.15% | 33.55% |
| p6.2.h | 780.00 | 227.99 | 452.40 | 0.34 | 617.81 | 0.91 | 246.34 | 42.00% | 20.79% |
| p6.2.i | 888.00 | 353.52 | 451.42 | 0.25 | 486.75 | 0.37 | 403.60 | 49.16% | 45.19% |
| p6.2.j | 942.60 | 207.13 | 625.73 | 0.43 | 702.37 | 0.59 | 215.53 | 33.62% | 25.49% |
| p6.2.k | 1032.00 | 76.17 | 748.20 | 0.53 | 810.18 | 0.90 | 82.34 | 27.50% | 21.49% |
| p6.2.l | 1116.00 | 20.09 | 763.34 | 0.47 | 774.61 | 0.56 | 36.35 | 31.60% | 30.59% |
| p6.2.m | 1188.00 | 93.70 | 729.43 | 0.38 | 872.27 | 0.60 | 267.96 | 38.60% | 26.58% |
| p6.2.n | 1248.00 | 155.93 | 718.33 | 0.33 | 758.98 | 0.39 | 150.24 | 42.44% | 39.18% |
| p6.3.g | 282.00 | 0.00 | 155.41 | 0.17 | 157.39 | 0.19 | 501.52 | 44.89% | 44.19% |
| p6.3.h | 444.00 | 101.40 | 252.34 | 0.18 | 262.69 | 0.22 | 281.13 | 43.17% | 40.84% |
| p6.3.i | 642.00 | 19.48 | 347.65 | 0.16 | 359.03 | 0.19 | 63.62 | 45.85% | 44.08% |
| p6.3.j | 828.30 | 0.00 | 464.95 | 0.18 | 468.22 | 0.25 | 130.10 | 43.87% | 43.47% |
| p6.3.k | 894.30 | 447.73 | 490.55 | 0.17 | 641.57 | 0.43 | 438.95 | 45.15% | 28.26% |
| p6.3.l | 1002.00 | 439.66 | 622.38 | 0.23 | 639.46 | 0.30 | 510.97 | 37.89% | 36.18% |
| p6.3.m | 1080.00 | 214.91 | 611.22 | 0.18 | 820.47 | 0.88 | 242.34 | 43.41% | 24.03% |
| p6.3.n | 1158.00 | 113.20 | 594.23 | 0.14 | 774.79 | 0.33 | 409.99 | 48.68% | 33.09% |
| p6.4.j | 366.00 | 0.00 | 204.61 | 0.10 | 207.25 | 0.11 | 417.85 | 44.10% | 43.37% |
| p6.4.k | 529.20 | 6.27 | 291.96 | 0.09 | 305.69 | 0.14 | 189.44 | 44.83% | 42.24% |
| p6.4.l | 696.00 | 366.44 | 402.17 | 0.11 | 418.13 | 0.15 | 79.77 | 42.22% | 39.92% |
| p6.4.m | 912.00 | 0.78 | 488.37 | 0.09 | 505.82 | 0.11 | 10.73 | 46.45% | 44.54% |
| p6.4.n | 1068.00 | 0.00 | 609.91 | 0.11 | 625.69 | 0.14 | 2.15 | 42.89% | 41.41% |
| Average | 813.23 | 128.81 | 477.82 | 0.25 | 529.79 | 0.39 | 225.08 | 41.79% | 36.32% |
Table 12. Results for p7 instances in a stochastic scenario with a high variance (c = 0.25).

| Instance | OBD (1) | Det. Time (s) | OBS-D (2) | Reliability (3) | OBS-S (4) | Reliability (5) | Stoch. Time (s) | Gap (1–2) | Gap (1–4) |
|---|---|---|---|---|---|---|---|---|---|
| p7.2.a | 30.00 | 0.00 | 20.95 | 0.47 | 21.92 | 0.48 | 0.29 | 30.17% | 26.93% |
| p7.2.b | 63.80 | 0.00 | 61.47 | 0.92 | 61.95 | 0.94 | 0.81 | 3.65% | 2.90% |
| p7.2.c | 102.10 | 1.00 | 76.13 | 0.54 | 96.54 | 0.92 | 1.94 | 25.44% | 5.45% |
| p7.2.d | 190.30 | 0.77 | 133.05 | 0.50 | 157.20 | 0.75 | 0.88 | 30.08% | 17.39% |
| p7.2.e | 290.30 | 7.83 | 203.15 | 0.51 | 234.93 | 0.73 | 10.34 | 30.02% | 19.07% |
| p7.2.f | 387.70 | 114.82 | 224.20 | 0.34 | 347.21 | 0.92 | 164.80 | 42.17% | 10.44% |
| p7.2.g | 459.05 | 3.78 | 254.84 | 0.32 | 406.40 | 0.86 | 42.03 | 44.49% | 11.47% |
| p7.2.h | 520.95 | 169.54 | 348.05 | 0.45 | 500.13 | 0.94 | 185.69 | 33.19% | 4.00% |
| p7.2.i | 578.95 | 436.75 | 371.64 | 0.42 | 538.85 | 0.84 | 541.58 | 35.81% | 6.93% |
| p7.2.j | 638.35 | 446.44 | 366.36 | 0.32 | 570.39 | 0.91 | 530.74 | 42.61% | 10.65% |
| p7.2.k | 689.05 | 369.74 | 494.81 | 0.52 | 628.97 | 0.91 | 435.06 | 28.19% | 8.72% |
| p7.2.l | 722.05 | 221.16 | 437.16 | 0.36 | 653.33 | 0.93 | 318.24 | 39.46% | 9.52% |
| p7.2.m | 791.25 | 479.88 | 452.78 | 0.31 | 748.51 | 0.96 | 495.76 | 42.78% | 5.40% |
| p7.2.n | 850.95 | 367.42 | 489.40 | 0.33 | 778.67 | 0.95 | 457.96 | 42.49% | 8.49% |
| p7.2.o | 905.55 | 468.53 | 634.88 | 0.47 | 848.53 | 0.95 | 559.51 | 29.89% | 6.30% |
| p7.2.p | 935.95 | 292.50 | 491.85 | 0.28 | 857.82 | 0.87 | 313.22 | 47.45% | 8.35% |
| p7.2.q | 966.95 | 465.21 | 657.58 | 0.41 | 901.56 | 0.92 | 471.56 | 31.99% | 6.76% |
| p7.2.r | 1028.75 | 312.64 | 629.43 | 0.39 | 907.80 | 0.89 | 329.56 | 38.82% | 11.76% |
| p7.2.s | 1057.00 | 228.95 | 740.77 | 0.47 | 945.32 | 0.81 | 388.35 | 29.92% | 10.57% |
| p7.2.t | 1090.85 | 339.00 | 582.05 | 0.30 | 882.49 | 0.83 | 363.04 | 46.64% | 19.10% |
| p7.3.b | 46.00 | 0.00 | 43.09 | 0.82 | 43.44 | 0.82 | 2.32 | 6.33% | 5.57% |
| p7.3.c | 78.80 | 0.00 | 76.00 | 0.91 | 76.68 | 0.93 | 1.43 | 3.55% | 2.69% |
| p7.3.d | 116.15 | 0.05 | 92.40 | 0.42 | 101.78 | 0.70 | 14.67 | 20.45% | 12.37% |
| p7.3.e | 175.45 | 313.17 | 145.65 | 0.54 | 159.24 | 0.73 | 313.54 | 16.98% | 9.24% |
| p7.3.f | 248.70 | 0.66 | 182.83 | 0.41 | 211.94 | 0.74 | 44.79 | 26.49% | 14.78% |
| p7.3.g | 346.05 | 13.70 | 229.62 | 0.25 | 287.98 | 0.78 | 22.21 | 33.65% | 16.78% |
| p7.3.h | 426.10 | 165.84 | 305.21 | 0.33 | 389.49 | 0.91 | 385.64 | 28.37% | 8.59% |
| p7.3.i | 487.25 | 283.73 | 341.30 | 0.32 | 452.41 | 0.92 | 389.99 | 29.95% | 7.15% |
| p7.3.j | 562.95 | 456.76 | 410.06 | 0.41 | 508.72 | 0.94 | 465.10 | 27.16% | 9.63% |
| p7.3.k | 630.30 | 504.05 | 387.72 | 0.24 | 558.55 | 0.90 | 519.38 | 38.49% | 11.38% |
| p7.3.l | 680.90 | 214.92 | 483.49 | 0.34 | 630.90 | 0.86 | 438.60 | 28.99% | 7.34% |
| p7.3.m | 754.75 | 226.50 | 518.17 | 0.32 | 690.18 | 0.90 | 441.35 | 31.35% | 8.56% |
| p7.3.n | 805.55 | 305.03 | 513.76 | 0.25 | 734.14 | 0.93 | 183.13 | 36.22% | 8.86% |
| p7.3.o | 852.45 | 143.70 | 530.79 | 0.22 | 763.63 | 0.87 | 460.49 | 37.73% | 10.42% |
| p7.3.p | 903.55 | 372.14 | 608.22 | 0.31 | 833.09 | 0.93 | 322.73 | 32.69% | 7.80% |
| p7.3.q | 941.75 | 264.96 | 506.83 | 0.16 | 827.73 | 0.89 | 193.57 | 46.18% | 12.11% |
| p7.3.r | 978.25 | 181.69 | 797.07 | 0.50 | 906.91 | 0.92 | 473.91 | 18.52% | 7.29% |
| p7.3.s | 1040.25 | 491.53 | 831.16 | 0.48 | 893.89 | 0.82 | 457.05 | 20.10% | 14.07% |
| p7.3.t | 1079.95 | 262.28 | 743.74 | 0.33 | 1004.30 | 0.90 | 424.20 | 31.13% | 7.00% |
| p7.4.b | 30.00 | 0.00 | 21.09 | 0.47 | 21.48 | 0.49 | 0.98 | 29.70% | 28.40% |
| p7.4.c | 46.00 | 0.00 | 46.00 | 1.00 | 46.00 | 1.00 | 1.77 | 0.00% | 0.00% |
| p7.4.d | 79.10 | 0.00 | 78.59 | 0.98 | 78.63 | 0.99 | 1.95 | 0.64% | 0.59% |
| p7.4.e | 123.90 | 0.00 | 99.80 | 0.43 | 112.18 | 0.67 | 123.22 | 19.45% | 9.46% |
| p7.4.f | 165.45 | 1.09 | 137.34 | 0.43 | 154.65 | 0.99 | 7.14 | 16.99% | 6.53% |
| p7.4.g | 219.05 | 71.91 | 168.52 | 0.31 | 198.63 | 0.86 | 83.75 | 23.07% | 9.32% |
| p7.4.h | 287.30 | 156.61 | 228.51 | 0.37 | 260.30 | 0.78 | 171.85 | 20.46% | 9.40% |
| p7.4.i | 367.55 | 0.50 | 257.56 | 0.25 | 327.45 | 0.92 | 6.06 | 29.93% | 10.91% |
| p7.4.j | 463.75 | 72.74 | 288.56 | 0.16 | 417.36 | 0.83 | 138.63 | 37.78% | 10.00% |
| p7.4.k | 519.05 | 203.39 | 362.19 | 0.26 | 479.96 | 0.83 | 372.78 | 30.22% | 7.53% |
| p7.4.l | 590.05 | 233.99 | 349.44 | 0.12 | 527.30 | 0.93 | 272.70 | 40.78% | 10.63% |
| p7.4.m | 645.10 | 250.97 | 462.62 | 0.25 | 592.95 | 0.75 | 337.53 | 28.29% | 8.08% |
| p7.4.n | 726.00 | 567.47 | 495.92 | 0.23 | 637.00 | 0.80 | 516.21 | 31.69% | 12.26% |
| p7.4.o | 778.20 | 289.40 | 495.47 | 0.16 | 671.89 | 0.64 | 295.91 | 36.33% | 13.66% |
| p7.4.p | 836.75 | 227.60 | 571.30 | 0.19 | 706.27 | 0.45 | 407.25 | 31.72% | 15.59% |
| p7.4.q | 901.15 | 99.10 | 639.99 | 0.27 | 818.63 | 0.76 | 158.03 | 28.98% | 9.16% |
| p7.4.r | 964.25 | 360.39 | 685.70 | 0.26 | 866.82 | 0.85 | 490.02 | 28.89% | 10.10% |
| p7.4.s | 1009.55 | 431.09 | 619.16 | 0.15 | 931.32 | 0.91 | 455.07 | 38.67% | 7.75% |
| p7.4.t | 1053.85 | 99.10 | 867.79 | 0.45 | 970.64 | 0.90 | 559.46 | 17.66% | 7.90% |
| Average | 573.47 | 206.76 | 384.37 | 0.40 | 516.95 | 0.85 | 268.38 | 29.32% | 9.98% |
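The Reliability columns report the probability that a solution's routes remain feasible (i.e., meet the driving-time limit) once travel times become random. A minimal sketch of how such a value can be estimated by Monte Carlo simulation, assuming lognormal travel times whose variance is scaled by a coefficient c (the helper names, the route data, and the exact variance model are illustrative assumptions, not the paper's implementation):

```python
import math
import random

def lognormal_travel_time(rng, mean, c=0.25):
    """Sample a travel time with E[T] = mean and Var[T] = c * mean (assumed model)."""
    var = c * mean
    mu = math.log(mean**2 / math.sqrt(var + mean**2))
    sigma = math.sqrt(math.log(1.0 + var / mean**2))
    return rng.lognormvariate(mu, sigma)

def estimate_reliability(route_times, t_max, runs=1000, seed=42):
    """Fraction of Monte Carlo runs in which the simulated route meets the deadline."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(runs):
        total = sum(lognormal_travel_time(rng, t) for t in route_times)
        ok += total <= t_max
    return ok / runs

# Illustrative route: expected leg times summing to 95 against a limit of 100.
print(estimate_reliability([20, 25, 30, 20], t_max=100.0))
```

Under this kind of estimate, a deterministic-optimal plan that packs routes close to the time limit tends to score a low reliability, which is consistent with the gap between columns (3) and (5) throughout Tables 7–12.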
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
