Peer-Review Record

Adaptive Large Neighborhood Search Metaheuristic for the Capacitated Vehicle Routing Problem with Parcel Lockers

by Amira Saker 1,2,*, Amr Eltawil 1,3 and Islam Ali 1,3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 30 June 2023 / Revised: 5 September 2023 / Accepted: 25 September 2023 / Published: 9 October 2023

Round 1

Reviewer 1 Report

The ideas in this paper are good. However, this article still needs to be revised as follows:

1.     The Introduction and Literature review also need to be enhanced, especially on routing optimization problems and solution algorithms, and on articles from recent years. These articles may be helpful for improving this paper: the fourth-party logistics routing problem using ant colony system-improved grey wolf optimization, 4PL routing problem using hybrid beetle swarm optimization, and a hybrid metaheuristic algorithm for the multi-objective location-routing problem in the early post-disaster stage.

2.     In section 4, Destruction Operators, Maintenance Operators, and Acceptance Criteria were used. Does the ALNS framework the paper proposes to utilize have any advantages? Please compare and contrast it with the original framework to highlight the advantages of the proposed algorithm.

3.     In Figure 8, the numbers don't match the legend labels.

4.     In section 5.4, for the comparison using ALNS and MIP, the solution of the heuristic algorithm is limited by the length of the running time, and the scientific validity of comparing the heuristic algorithms by limiting the length of their running time is questionable. Please do more to verify the ability of the ALNS from multiple angles.

5.     The proposed algorithms can also be compared with recently designed algorithms like CSOA. References like: credit portfolio management using two-level particle swarm optimization. That is one of the ways to verify the contribution of the paper.

6.     It would be helpful to draw some figures analyzing the ability of the proposed algorithm in comparison with others.

Minor editing of English language required

Author Response

Thank you for your thoughtful review of our work. Your feedback is valuable, and we are grateful for the time you've taken to share your insights. Your comments have prompted us to consider various aspects of the project. Here's how we plan to incorporate your feedback:

1.     The Introduction and Literature review also need to be enhanced, especially on routing optimization problems and solution algorithms, and on articles from recent years. These articles may be helpful for improving this paper: the fourth-party logistics routing problem using ant colony system-improved grey wolf optimization, 4PL routing problem using hybrid beetle swarm optimization, and a hybrid metaheuristic algorithm for the multi-objective location-routing problem in the early post-disaster stage.

We agree that enhancing the coverage of the routing optimization problem and solution algorithms, as well as incorporating recent articles, will contribute significantly to the quality of our paper. We have carefully reviewed the articles you mentioned and have incorporated them into our revision to address the gaps you've highlighted. References 6 and 8 were added as follows:

[6]  F. Lu, W. Chen, W. Feng, and H. Bi, "4PL routing problem using hybrid beetle swarm optimization," Soft Computing, pp. 1–14, 2023.

[8]  F.-Q. Lu, M. Huang, W.-K. Ching, and T. K. Siu, "Credit portfolio management using two-level particle swarm optimization," Information Sciences, vol. 237, pp. 162–175, 2013.

 

 

2.     In section 4, Destruction Operators, Maintenance Operators, and Acceptance Criteria were used. Does the ALNS framework the paper proposes to utilize have any advantages? Please compare and contrast it with the original framework to highlight the advantages of the proposed algorithm.

We have addressed these requirements as follows:

    • Testing the performance of the ALNS parameters

To comprehensively evaluate the diverse parameters employed within the ALNS algorithm, a series of iterative analyses were conducted on a dataset encompassing various instance sizes. The results of these investigations, encapsulated in Table 6, document the computed averages of objective function values derived from 5 individual runs. These runs were conducted using an assortment of distinct destroy (D) and repair (R) operator combinations, affording a comprehensive understanding of algorithmic performance across varied operational scenarios.

 

 

Table 6. Comparative Analysis of Various Destroy and Repair Combinations for ALNS Solution Approaches.

| Instance | Data Size | 1D1R | 1D2R | 2D1R | 2D2R | 3D1R | 3D2R | 4D1R | 4D2R | 5D1R | 5D2R |
|---|---|---|---|---|---|---|---|---|---|---|---|
| r50_5_1 | 50 | 205 | 179 | 193 | 177 | 177 | 177 | 184 | 187 | 180 | 177 |
| r50_5_2 | 50 | 209 | 185 | 204 | 182 | 201 | 184 | 184 | 199 | 184 | 184 |
| r50_5_3 | 50 | 190 | 188 | 188 | 190 | 190 | 188 | 188 | 188 | 190 | 188 |
| C101 | 100 | 2,514 | 2,513 | 2,497 | 2,498 | 2,518 | 2,494 | 2,501 | 2,477 | 2,464 | 2,403 |
| RC1_2 | 200 | 7,284 | 7,093 | 7,268 | 7,070 | 7,280 | 7,020 | 7,322 | 7,148 | 7,116 | 6,842 |
| RC_6 | 600 | 23,858 | 22,170 | 23,732 | 22,066 | 22,779 | 21,221 | 21,847 | 21,121 | 21,104 | 20,436 |
| R1_8 | 800 | 37,783 | 37,971 | 36,148 | 36,087 | 36,117 | 36,028 | 37,071 | 36,299 | 36,269 | 34,759 |
| R2_8 | 800 | 37,720 | 36,786 | 38,643 | 38,131 | 36,684 | 36,572 | 36,551 | 35,832 | 36,047 | 35,131 |
| C1_10 | 1000 | 44,856 | 46,461 | 41,718 | 43,429 | 44,029 | 41,369 | 44,130 | 44,810 | 41,711 | 41,131 |
| C2_10 | 1000 | 42,551 | 41,086 | 42,450 | 42,085 | 43,454 | 40,553 | 40,883 | 40,593 | 40,906 | 40,114 |

3.     In Figure 8, the numbers don't match the legend labels.

We have addressed the issue you pointed out regarding Figure 8. The data in the table now correspond accurately to the legend labels, ensuring a clear and coherent presentation of the data, as follows:

Table 4: Comparison of the ALNS and MIP solution approaches applied to 8, 10, and 12 customers.

 

| Number of Customers | Radar Chart of 20 Iterations | ALNS Run Times | MIP Run Times | Objective Function Values |
|---|---|---|---|---|
| 8 | 0.2% | 10^-5 seconds | 0.4 seconds | 429 |
| 10 |  | 0.02 seconds | 0.7 seconds | 490 |
| 12 |  | 0.9 seconds | 14 seconds | 578 |

4.     In section 5.4, for the comparison using ALNS and MIP, the solution of the heuristic algorithm is limited by the length of the running time, and the scientific validity of comparing the heuristic algorithms by limiting the length of their running time is questionable. Please do more to verify the ability of the ALNS from multiple angles.

 

             

In Table 7, the MIP model was already executed over a duration of five hours, a runtime that may be deemed impractical. However, it is noteworthy that this prolonged runtime was undertaken by the authors with the explicit intent of attaining the best results possible. This protracted computational endeavor was subsequently contrasted with the ALNS approach, wherein runs were conducted over a prolonged interval of ten minutes. It is imperative to underscore that securing superior outcomes within short runtimes, less than two minutes, constitutes a compelling facet of competitiveness, a pursuit that has been realized and substantiated within our paper.

In VRP, where large instances are involved, time holds significant importance. Our objective is to identify efficient algorithms that provide practical solutions within reasonable timeframes. Real-world applicability necessitates a balance between solution quality and time. Our analysis also explores ALNS behavior as running time increases within an acceptable range according to the literature.

    • Testing the performance of the ALNS against Extended MIP Runs

To demonstrate the quality of the ALNS algorithm, large instances (1000 customers) were solved by the MIP model with five-hour runtimes and compared with the heuristic solutions at 1, 2, 5, and 10 minutes (the average of 5 runs is considered). Gaps between the two solution methods are reported in Table 7. The detailed results for each instance are reported in Table A.1 in Appendix A. Although it is not practical to run the MIP model for this considerable time, and it occupies a massive amount of RAM, this was done to validate the algorithm's performance.
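For clarity on how the reported gaps can be read, the percentages in Table 7 are consistent with the usual convention of measuring the ALNS objective value against the MIP objective value, so that negative values indicate that ALNS found a cheaper solution. The short sketch below is only an illustration of that reading (not the authors' code); it reproduces the C1_10 row of the table.

```python
def percentage_gap(alns_obj: float, mip_obj: float) -> float:
    """Gap of an ALNS solution relative to the MIP solution; negative means ALNS is better."""
    return 100.0 * (alns_obj - mip_obj) / mip_obj

# Values taken from the C1_10 row of Table 7 below.
mip_5h = 37_811
for minutes, alns_obj in [(1, 39_268), (2, 37_751), (5, 36_287), (10, 35_624)]:
    print(f"ALNS {minutes:>2} min: gap = {percentage_gap(alns_obj, mip_5h):+.0f}%")
# Rounded, this yields roughly +4%, 0%, -4%, and -6%, matching the table.
```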

Table 7. Comparison of the MIP and ALNS solution approaches applied to 1000 customer instances. For MIP, the solutions found after five hours of runtime are reported. For ALNS, the solutions found after 1, 2, 5, and 10 minutes and the percentage gap to the best solution found by MIP at these runtimes are reported.

| Instance | MIP 5 hrs. Sol (Gap) | ALNS 1 min. Sol (Gap) | ALNS 2 min. Sol (Gap) | ALNS 5 min. Sol (Gap) | ALNS 10 min. Sol (Gap) |
|---|---|---|---|---|---|
| C1_10 | 37,811 (72%) | 39,268 (4%) | 37,751 (0%) | 36,287 (-4%) | 35,624 (-6%) |
| C2_10 | 39,753 (54%) | 41,483 (4%) | 38,968 (-2%) | 36,912 (-7%) | 35,648 (-10%) |
| R1_10 | 54,370 (47%) | 47,738 (-12%) | 47,017 (-14%) | 45,731 (-16%) | 44,526 (-18%) |
| R2_10 | 52,413 (45%) | 48,730 (-7%) | 47,272 (-10%) | 45,443 (-13%) | 44,340 (-15%) |
| RC1_10 | 53,400 (57%) | 45,671 (-14%) | 44,642 (-16%) | 43,020 (-19%) | 42,127 (-21%) |
| Avg. | 47,549 (55%) | 44,578 (-5%) | 43,130 (-8%) | 41,478 (-12%) | 40,453 (-14%) |

5.     The proposed algorithms can also be compared with recently designed algorithms like CSOA. References like: credit portfolio management using two-level particle swarm optimization. That is one of the ways to verify the contribution of the paper.

Thank you for your insightful comment regarding the potential comparison of our algorithm with CSOA. While we concur with the efficacy of CSOA, it is crucial to delineate that our study employs distinct problem variants, rendering a direct algorithmic comparison untenable.

Our recognition of CSOA's relevance is reflected in its citation within the introduction, underscoring its significance in the broader problem domain. While our present study does not encompass this comparison, we are actively considering its implementation in forthcoming research endeavors.

 

 

6.     It would be helpful to draw some figures analyzing the ability of the proposed algorithm in comparison with others.

Unfortunately, the absence of existing literature on the Capacitated Vehicle Routing Problem with Delivery Options (CVRPDO) limits the feasibility of direct algorithmic comparisons. As an alternative, we have chosen to juxtapose our proposed Adaptive Large Neighborhood Search (ALNS) algorithm with the established Mixed-Integer Programming (MIP) model.

To ensure a comprehensive evaluation, we have incorporated the MIP model's optimality gap into our results. This enriches the interpretation of the performance contrast between ALNS and the established optimization standard.

The MIP gap to optimality is reported for Tables 4 to 10 as follows:

Table 8. Comparative Analysis of MIP and ALNS Solution Approaches on 1000 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_10 | 48,058 (78%) | 41,131 (-14%) | 39,268 (-18%) | 38,366 (-20%) | 37,751 (-21%) |
| C2_10 | 47,728 (62%) | 43,114 (-10%) | 41,483 (-13%) | 40,036 (-16%) | 38,968 (-18%) |
| R1_10 | 69,597 (59%) | 49,655 (-29%) | 47,738 (-31%) | 47,321 (-32%) | 47,017 (-32%) |
| R2_10 | 69,010 (59%) | 50,866 (-26%) | 48,730 (-29%) | 47,981 (-31%) | 47,272 (-32%) |
| RC1_10 | 55,882 (59%) | 47,048 (-16%) | 45,671 (-18%) | 44,702 (-20%) | 44,642 (-20%) |
| Avg. | 58,055 (63%) | 46,363 (-19%) | 44,578 (-22%) | 43,681 (-24%) | 43,130 (-25%) |

Table 9. Comparative Analysis of MIP and ALNS Solution Approaches on 800 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_8 | 43,489 (81%) | 27,893 (-36%) | 27,279 (-37%) | 26,891 (-38%) | 26,646 (-39%) |
| C2_8 | 40,022 (63%) | 30,549 (-24%) | 29,330 (-27%) | 28,770 (-28%) | 28,422 (-29%) |
| R1_8 | 49,105 (59%) | 35,861 (-27%) | 34,752 (-29%) | 33,983 (-31%) | 33,490 (-32%) |
| R2_8 | 43,725 (54%) | 35,131 (-20%) | 34,763 (-20%) | 34,042 (-22%) | 33,722 (-23%) |
| RC1_8 | 45,585 (64%) | 35,623 (-22%) | 34,639 (-24%) | 34,139 (-25%) | 33,807 (-26%) |
| Avg. | 44,385 (64%) | 33,012 (-26%) | 32,153 (-28%) | 31,565 (-29%) | 31,217 (-30%) |

Table 10. Comparative Analysis of MIP and ALNS Solution Approaches on 600 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_6 | 17,706 (64%) | 18,862 (7%) | 18,452 (4%) | 18,140 (2%) | 17,884 (1%) |
| C2_6 | 21,190 (46%) | 21,491 (1%) | 20,709 (-2%) | 19,935 (-6%) | 19,663 (-7%) |
| R1_6 | 22,780 (43%) | 21,458 (-6%) | 21,072 (-7%) | 20,873 (-8%) | 20,729 (-9%) |
| R2_6 | 23,286 (44%) | 21,733 (-7%) | 21,209 (-9%) | 21,003 (-10%) | 20,796 (-11%) |
| RC1_6 | 20,978 (49%) | 20,325 (-3%) | 19,871 (-5%) | 19,618 (-6%) | 19,392 (-8%) |
| Avg. | 13,441 (49%) | 20,774 (-2%) | 20,263 (-4%) | 19,914 (-6%) | 19,693 (-7%) |

Table 11. Comparative Analysis of MIP and ALNS Solution Approaches on 400 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_4 | 12,518 (69%) | 12,662 (1%) | 12,518 (0%) | 12,283 (-2%) | 12,133 (-3%) |
| C2_4 | 12,523 (51%) | 12,697 (1%) | 12,343 (-1%) | 12,252 (-2%) | 12,218 (-2%) |
| R1_4 | 14,414 (45%) | 14,424 (0%) | 13,877 (-4%) | 13,579 (-6%) | 13,524 (-6%) |
| R2_4 | 14,490 (49%) | 14,638 (1%) | 14,105 (-3%) | 13,985 (-3%) | 13,916 (-4%) |
| RC1_4 | 14,034 (55%) | 13,811 (-2%) | 13,501 (-4%) | 13,211 (-6%) | 13,026 (-7%) |
| Avg. | 13,596 (54%) | 13,646 (0%) | 13,269 (-2%) | 13,062 (-4%) | 12,963 (-5%) |

Table 12. Comparative Analysis of MIP and ALNS Solution Approaches on 200 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_2 | 6,757 (70%) | 6,834 (1%) | 6,679 (-1%) | 6,555 (-3%) | 6,489 (-4%) |
| C2_2 | 6,998 (57%) | 6,734 (-4%) | 6,667 (-5%) | 6,626 (-5%) | 6,583 (-6%) |
| R1_2 | 7,735 (55%) | 7,054 (-9%) | 6,973 (-10%) | 6,944 (-10%) | 6,917 (-11%) |
| R2_2 | 7,368 (52%) | 7,112 (-3%) | 6,962 (-6%) | 6,911 (-6%) | 6,887 (-7%) |
| RC1_2 | 6,725 (55%) | 6,779 (1%) | 6,617 (-2%) | 6,593 (-2%) | 6,567 (-2%) |
| Avg. | 7,117 (56%) | 6,903 (-3%) | 6,780 (-5%) | 6,726 (-5%) | 6,689 (-6%) |

Table 13. Comparative Analysis of MIP and ALNS Solution Approaches on 100 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 10, 20, 30, and 40 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 10 sec Sol (Gap) | ALNS 20 sec Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 40 sec Sol (Gap) |
|---|---|---|---|---|---|
| C101 | 2,390 (63%) | 2,466 (3%) | 2,405 (1%) | 2,403 (1%) | 2,403 (1%) |
| C201 | 2,615 (55%) | 2,810 (7%) | 2,757 (5%) | 2,720 (4%) | 2,714 (4%) |
| R101 | 2,274 (37%) | 2,349 (3%) | 2,320 (2%) | 2,284 (0%) | 2,284 (0%) |
| R201 | 2,192 (37%) | 2,204 (1%) | 2,182 (0%) | 2,173 (-1%) | 2,167 (-1%) |
| Rc101 | 2,904 (53%) | 3,150 (8%) | 3,090 (6%) | 3,054 (5%) | 3,021 (4%) |
| Avg. | 2,475 (49%) | 2,596 (5%) | 2,551 (3%) | 2,527 (2%) | 2,518 (2%) |

Table 14. Comparative Analysis of MIP and ALNS Solution Approaches on 50 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 0.5 Hour of Runtime) at ALNS Runtimes of 5, 10, 15, and 20 Seconds.

| Instance | MIP 0.5 hr. Sol (Gap) | ALNS 5 sec Sol (Gap) | ALNS 10 sec Sol (Gap) | ALNS 15 sec Sol (Gap) | ALNS 20 sec Sol (Gap) |
|---|---|---|---|---|---|
| r50_5_1 | 180 (29%) | 178 (-1%) | 177 (-2%) | 177 (-2%) | 177 (-2%) |
| r50_5_2 | 185 (31%) | 185 (0%) | 185 (0%) | 185 (0%) | 184 (0%) |
| r50_5_3 | 188 (34%) | 189 (0%) | 188 (0%) | 188 (0%) | 188 (0%) |
| r50_5_4 | 187 (36%) | 184 (-2%) | 184 (-2%) | 183 (-2%) | 183 (-2%) |
| r50_5_5 | 197 (37%) | 195 (-1%) | 195 (-1%) | 193 (-2%) | 193 (-2%) |
| Avg. | 187 (33%) | 186 (-1%) | 186 (-1%) | 185 (-1%) | 185 (-1%) |

 

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors observed the topic of the capacitated vehicle routing problem with parcel lockers, for which they presented MIP and ALNS solution approaches. The paper is fairly well written, on a very interesting topic. There are some issues regarding the problem/model descriptions that require further consideration by the authors. My biggest concern regarding this submission is the scientific contribution of the presented models and research. Also, the results section needs improvement. The detailed remarks are given in the following. I hope that these comments can help the authors to improve their research.

At line 19 authors stated: ”Also, it was found that the ALNS algorithm outperformed the solver solution, especially with large size instances.” Authors should emphasise in what exact aspect (or aspects) was ALNS better.

Keywords should better describe the topic and the content of the paper.

The term shared locations should be defined in the introduction section of the paper (one of two sentences, a short definition).

At line 192 authors stated: ”In the routes’ representation part, the absence of an empty list ([]), which signifies routes without any customers, is crucial.” Why is this crucial? It seems that lines from 192-211 should not be a part of the Problem description, but should be a part of some heuristic approach (same goes for segment in lines 224-236). Problem description should describe the problem, not the solution procedures or approaches for that problem. Also, why do you focus on only two erroneous cases in line 197 (I assume that there are more of these invalid solutions that should be discarded in the search procedures, e.g., one node in two routes or at two positions in a single route etc). Please elaborate on these issues.

Most of the content in lines 238-258 should be a part of problem description (these descriptions and constraints are not only important for mathematical mode, but are also related to the heuristic approach, in other words they define the problem itself).

Line 266, why is F defined in plural (sets)?

In Table 1 abbreviation SDL is not defined. Does it stand for Shared Delivery Location? If that is the case, define this in the text and use this term instead of Shared Location. In the same table, authors introduced parameter Qv, but later they use Q (constraints 10). Please check all notations usages in the paper.

Regarding the usage of the term constraint in the MIP description, authors should use the plural. For example, at line 237: "Constraint (2) guarantees that each …" but there are ∀i ∈ I constraints of this type. The same goes for the description of all constraints. Please make a correction.

Section 3.2 should not be a part of the Problem description section, but a standalone section. Section 4 title is not adequate, it should include the type of solution method described in that section.

Authors used the MIP model developed by Simona Mancini, which was extended by two capacity constraints. The MIP extension is not significant in the sense of contribution to the research. Also, authors used the ALNS package developed by N. Wouda (lines 305-306). My biggest concern regarding this submission is the scientific contribution of the presented models and research. What is the originality of the proposed research, as well as the scientific novelty and contribution, especially considering [28] and [29]?

The ALNS Framework in Figure 4. with the following text that describes it, should be improved (Section 4.). Where is the local search in Figure 4? How did you exactly implement the roulette wheel selection, on what operators, are they individual or in pairs “Destroy-Repair”, etc? What is the Termination condition?

What is the MIP gap to optimality for large scale instances (1000 customers, Table 4). Detailed results for the cases of 8, 10 and 12 customers should be also presented in the form of table (including the computational times for both approaches). Presentation of the results should include the optimality of solutions obtained from the MIP model (in how many instances was optimality reached, what is the gap to optimality if it was not reached etc.) It seems that there is a large gap in the scale of test instances between 12 and 100 customers. Authors should add at least one more scale of 50 customers test instances, with detailed results.

Author Response

Thank you for your insightful review of our manuscript, "Adaptive Large Neighborhood Search Metaheuristic for the Capacitated Vehicle Routing Problem with Parcel Lockers." Your feedback is greatly appreciated and has significantly improved the quality of the paper. Your expertise and thoughtful comments have been invaluable in refining the content. We are genuinely grateful for your time and dedication to this review process. Here's our outlined strategy for integrating your feedback:

At line 19 authors stated:” Also, it was found that the ALNS algorithm outperformed the solver solution, especially with large size instances.” Authors should emphasise in what exact aspect (or aspects) was ALNS better.

To emphasise in what exact aspect ALNS was better, this sentence was added: "The results of objective function values were improved when solving the problem by ALNS for a 120-second runtime compared with the MIP model (with a runtime of 3 hours) by 25%, 30%, 7%, 5%, and 6% for 1000, 800, 600, 400, and 200 customers, respectively.".

Keywords should better describe the topic and the content of the paper.

The keywords were improved as follows: "Keywords: Delivery Options; Shared Delivery Locations; Parcel Lockers; ALNS.".

The term shared locations should be defined in the introduction section of the paper (one of two sentences, a short definition).

The term shared locations is defined in the introduction section at lines 32-36 as follows: "For instance, depending on an Alternate Delivery Program allows packages to be delivered to designated Shared Delivery Locations (SDL) rather than the recipient's primary residence or workplace [2]. SDLs, such as parcel lockers, are generally located in places that are open throughout the day, e.g., supermarkets and railway stations [3].".

 

 

At line 192 authors stated: ”In the routes’ representation part, the absence of an empty list ([]), which signifies routes without any customers, is crucial.” Why is this crucial?

The paragraph has been revised for enhanced clarity as follows: "In the routes' representation part, any list […] represents a vehicle, and the number of these lists is the number of utilized vehicles. As a result, the empty list ([]), which signifies a vehicle without any customers, must be removed. Routes lacking customers are deemed invalid and are subsequently removed from the solution. However, if a SDL exists but remains unvisited, it will be denoted as an empty list ([]), and this list will not be removed from the solution representation. Indeed, this representation denotes the presence of the node while acknowledging that it is not visited.".

It seems that lines from 192-211 should not be a part of the Problem description, but should be a part of some heuristic approach (same goes for segment in lines 224-236). Problem description should describe the problem, not the solution procedures or approaches for that problem. Also, why do you focus on only two erroneous cases in line 197 (I assume that there are more of these invalid solutions that should be discarded in the search procedures, e.g., one node in two routes or at two positions in a single route etc). Please elaborate on these issues.

In this paper, the authors have introduced a novel representation paradigm for addressing the vehicle routing problem in the context of delivery options. This representation framework is intentionally agnostic to any particular solution methodology, embodying a generalizable approach. Consequently, our attention is directed towards the identification of inaccuracies or errors intrinsic to this representation.

 

 

 

Most of the content in lines 238-258 should be a part of problem description (these descriptions and constraints are not only important for mathematical mode, but are also related to the heuristic approach, in other words they define the problem itself).

These lines were transferred to the Problem Description and Capacity Synchronization sections as follows:

  1. Problem Description

In the context of the VRP, the deliveries are exclusively made to the predetermined locations of the customers. However, in the specific problem examined within this research, referred to as the CVRPDO, the requests assigned to customers may be delivered either directly to their designated locations or to a SDL proximate to them. Subsequently, the customers have the flexibility to retrieve their parcels from this SDL at their convenience. The CVRPDO problem entails a scenario where a specific number of customers (I) are considered, with each customer having a different demand size. The problem is formulated with the central depot as the starting point for all vehicles' routes. The SDLs (f) are represented as customer nodes. The objective is to deliver each customer's demand either directly to their location or to one of the SDLs, from which the customer can subsequently collect their requests. However, customers are limited to being assigned only to a subset of SDLs that fall within a specified distance limit denoted as "?", as shown in Figure 1.

 

Figure 1: Boundaries for Customer Assignment to SDLs within Distance Limit (?).

  • Capacity Synchronization

The CVRPDO entails the consideration of two distinct types of capacities. Firstly, the SDL capacity, denoted as Bf, is influenced by factors such as the availability of unutilized lockers at each SDL. It is assumed that these lockers possess a predetermined size. Consequently, when a customer's request is allocated to a SDL, it is accommodated by only one of the available lockers, irrespective of the size of the request. The capacity of a SDL is contingent upon the quantity of customers that can be served within this SDL. Secondly, vehicle capacity constraints encompass the consideration of the size of customer requests, specifically the number of units (demand size) within each request.

When examining the capacity of the SDLs, the demands of individual customers are considered as a single unit. This approach assumes that the size of the lockers is not a determining factor. However, when considering the capacity of the routes, it becomes crucial to account for the request size associated with each customer. For example, suppose customer x has five parcels to be delivered to SDL f. In that case, the capacity value for the route will be incremented by 5, and one additional locker in the SDL will be occupied as a result.

Figure 2 presents a graphical depiction of a proposed solution (S). The depicted scenario portrays a single depot characterized by the node denoted as (0), encompassing a total of eight customers denoted by nodes (1-8), and two SDL nodes identified as (9, 10). Considering this example, let us assume that the capacity of each SDL is uniformly set at 3 (thus, the length of the SDL lists will not exceed 3). Additionally, the vehicle capacity is defined as 15 units per vehicle. Consequently, when attempting to add a customer to a SDL, it is essential to follow the subsequent sequence of capacity checks:

  1. Verification of the SDL list length: Ensure that the number of customers associated with the SDL is less than or equal to the SDL capacity (in this example, 3).
  2. Calculation of the SDL demand after incorporating the new customer: Assess the total demand attributed to the SDL by considering the demands of all customers assigned to it, including the new addition.
  3. Recheck the vehicle capacity constraints if a customer is assigned to a SDL visited by that vehicle. This verification step is necessary as the SDL's demands may differ from the initial check for this constraint.

If any of the aforementioned capacity checks fail to meet the designated criteria, the addition of the customer violates one or more of the capacity constraints.
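The three checks just listed can also be expressed compactly in code. The sketch below is a minimal illustration under assumed data structures (a demand dictionary, the customers currently assigned to each SDL, and the load of the route visiting the SDL); it is not the authors' implementation, and the default capacity values simply mirror the example above (SDL capacity 3, vehicle capacity 15).

```python
def can_assign_to_sdl(customer, sdl, demand, sdl_customers, route_load,
                      sdl_capacity=3, vehicle_capacity=15):
    """Sequential capacity checks for letting `customer` pick up at `sdl`.

    demand        : dict node -> demand size
    sdl_customers : dict sdl -> customers currently assigned to that SDL
    route_load    : current total demand of the route visiting `sdl`
                    (home-delivery customers plus aggregated SDL demands)
    """
    # 1. SDL list length: the number of assigned customers must not exceed the SDL capacity.
    if len(sdl_customers[sdl]) + 1 > sdl_capacity:
        return False
    # 2. SDL demand after incorporating the new customer.
    new_sdl_demand = sum(demand[c] for c in sdl_customers[sdl]) + demand[customer]
    # 3. Re-check the vehicle capacity of the route visiting this SDL: its load grows
    #    by exactly the difference between the new and old aggregated SDL demand.
    old_sdl_demand = new_sdl_demand - demand[customer]
    return route_load - old_sdl_demand + new_sdl_demand <= vehicle_capacity
```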

  • Solution Representation

In the example given in Figure 2, the solution representation consists of two distinct parts: the first part represents the routes, excluding the inclusion of the depot at both the beginning and end of each tour, while the second part illustrates the customers who will pick up their parcels from SDLs. To elucidate this example, there are two routes: R1 (0, 1, 2, 3, 9, 0) and R2 (0, 10, 7, 8, 0). In this scenario, customers (4, 5) are assigned to pick up their requests from the SDL (9), whereas customer (6) is responsible for picking up their parcels from the SDL (10).

 

Figure 2: Graphical representation of a solution (S)

In the routes' representation part, any list […] represents a vehicle, and the number of these lists is the number of utilized vehicles. As a result, the empty list ([]), which signifies a vehicle without any customers, must be removed. Routes lacking customers are deemed invalid and are subsequently removed from the solution. However, if a SDL exists but remains unvisited, it will be denoted as an empty list ([]), and this list will not be removed from the solution representation. Indeed, this representation denotes the presence of the node while acknowledging that it is not visited.

Figure 3 serves as an illustrative example, demonstrating erroneous solutions (S1, S3) and their respective corrections (S2, S4). Solution S1 is considered incorrect as it includes empty routes, which must be removed from the solution set. Similarly, solution S3 is deemed erroneous since SDL (10) remains unvisited by any customers. Consequently, SDL (10) should not be considered in any route within the solution.

 

Figure 3: Incorrect Solution Representations and Their Corrections

The demands associated with the SDL must be incorporated to ascertain compliance with vehicle capacity constraints. If a SDL remains unoccupied, its demand is deemed zero, negating the requirement for any vehicle to visit it. Conversely, if the SDL contains parcels to be picked up, the aggregate demand is computed as the sum of the demands corresponding to the customers assigned to retrieve their parcels from this SDL. This aggregation can be expressed mathematically for solution (S) in Figure 2 as follows:

d[9] = d[4] + d[5] = 5
d[10] = d[6] = 6
Capacity[R1] = d[1] + d[2] + d[3] + d[9] = 13
Capacity[R2] = d[10] + d[7] + d[8] = 15
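As a small executable illustration of this aggregation, the sketch below uses an assumed dictionary-based encoding of solution S from Figure 2 (the individual demand values are hypothetical, chosen only so that the totals match the four equations above).

```python
# Assumed encoding of solution S from Figure 2: tours without the depot,
# plus the customers assigned to pick up at each SDL.
routes = {"R1": [1, 2, 3, 9], "R2": [10, 7, 8]}
pickups = {9: [4, 5], 10: [6]}                              # SDL -> pickup customers
demand = {1: 2, 2: 5, 3: 1, 4: 2, 5: 3, 6: 6, 7: 4, 8: 5}   # hypothetical demand sizes

# SDL demand = sum of the demands of the customers assigned to it.
sdl_demand = {f: sum(demand[c] for c in custs) for f, custs in pickups.items()}

# Route load = home-delivery demands plus the aggregated demand of each visited SDL.
def route_load(route):
    return sum(sdl_demand.get(node, demand.get(node, 0)) for node in route)

print(sdl_demand[9], sdl_demand[10])                        # 5 and 6, as above
print(route_load(routes["R1"]), route_load(routes["R2"]))   # 13 and 15, as above
```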

  1. Mathematical Model Formulation

 

Line 266, why is F define in plural (sets)?

This mistake is corrected as follows: "F  Set of all shared delivery locations".

In Table 1 abbreviation SDL is not defined. Does it stand for Shared Delivery Location? If that is the case, define this in the text and use this term instead of Shared Location. In the same table, authors introduced parameter Qv, but later they use Q (constraints 10). Please check all notations usages in the paper.

The SDL abbreviation is defined in the introduction as follows: "For instance, depending on an Alternate Delivery Program allows packages to be delivered to designated Shared Delivery Locations (SDL) rather than the recipient's primary residence or workplace".

The notations have been checked.

 

 

Regarding the usage of the term constraint in the MIP description, authors should use the plural. For example, at line 237: "Constraint (2) guarantees that each …" but there are ∀i ∈ I constraints of this type. The same goes for the description of all constraints. Please make a correction.

The constraints were corrected as follows: "Constraints (2) guarantee that each request is either delivered to a SDL or directly to a customer's location. In order to maintain the continuity of flow in the model, constraints (3) are enforced. Constraints (4) and (5) are responsible for allocating SDLs only if at least one customer intends to visit them.

To ensure compliance with the capacity restrictions of SDLs, constraints (6) are imposed. If the distance between a customer and a SDL exceeds a predetermined limit, constraints (7) prevent the assignment of the customer to that SDL. Constraints (8) to (10) are implemented to prevent the total route capacity from surpassing the predefined limit.”.

Section 3.2 should not be a part of the Problem description section, but a standalone section. Section 4 title is not adequate, it should include the type of solution method described in that section.

The Mathematical Model is separated into Section 4. Section 4 becomes Section 5, titled "5. Solution Using Adaptive Large Neighbourhood Search Approach". So, the titles became as follows:

 

 

 

 

Authors used MIP model developed by Simona Mancini, that was extended by two capacity constraints. The MIP extension is not significant in the sense of contribution to the research. Also, authors used ALNS package developed by N. Wouda (lines 305-306). My biggest concern regarding this submission, is the scientific contribution of the presented models and research. What is the originality of the proposed research, as well as the scientific novelty and contribution, especially considering [28] and [29]?

Considering the vehicular capacity constraint enhances the practicality and relevance of the problem under consideration. This is particularly pertinent in scenarios involving the last mile, where vehicle dimensions are often constrained compared to those encountered in high-speed routes. Moreover, this research introduces an innovative solution representation that effectively augments the efficacy of optimization algorithms.

The alns package, a software module of a general nature, is conceived within the Object-Oriented Programming (OOP) paradigm, epitomizing a versatile architectural construct comprising a set of interrelated classes. To benefit from it in any other problem, such as in our case, we adopted this framework with our own initial solution, objective function, repair operators, destroy operators, acceptance criteria, and operator selection criteria.

For more clarification, these paragraphs have been rewritten:

“This paper presents various noteworthy contributions. First, it addresses a gap within the current literature since existing studies on the vehicle routing problem in the context of delivery options often either disregard vehicle capacity constraints entirely or overly simplify them, reducing them to the maximal count of customers that a vehicle can accommodate. In contrast, this paper introduces a more nuanced treatment of vehicle capacity constraints, encompassing the customer’s request size, quantified by the number of units (demand size) within each request. Moreover, the paper incorporates consideration of SDL capacity, denoting the count of utilized lockers within a SDL, allowing for synchronization among SDLs capacity and vehicle capacity. Besides, the paper provides an efficient implementation of the Adaptive Large Neighborhood Search (ALNS) which was tailored to solve the problem in question. An innovative solution representation is introduced. Furthermore, several repair and destroy operators have been incorporated into the framework's implementation. While most existing literature primarily focuses on problem instances with up to 400 customers, this paper extends the scope by considering instances with up to 1000 customers.”

 

 

 

“This research paper benefits from successfully implementing the alns package, originally developed by N. Wouda [32], using Python. This package was originally developed to provide solutions to simple vehicle routing and cutting stock problems. The authors have embraced this overarching framework and extended it to address the CVRPDO problem variant. The framework was extended by introducing an innovative solution representation suitable to the problem in question. Furthermore, the objective function was adjusted, and custom repair and destroy operators have been incorporated into the framework's implementation.”
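For readers unfamiliar with that package, the fragment below sketches how such an extension is typically wired together. It is only an illustration under assumptions: the state class, the placeholder operators, the toy instance, and the parameter values are invented for this example, and the exact alns interface (operator selection, acceptance, and stopping classes) may differ between package versions.

```python
import copy
from alns import ALNS
from alns.accept import HillClimbing          # Record-to-Record Travel is also available
from alns.select import RouletteWheel
from alns.stop import MaxRuntime

class CVRPDOState:
    """Illustrative solution state: tours plus SDL pickup assignments."""
    def __init__(self, routes, pickups, demand, dist):
        self.routes, self.pickups = routes, pickups
        self.demand, self.dist = demand, dist

    def copy(self):
        return copy.deepcopy(self)

    def objective(self):                       # required by the alns package
        return sum(self.dist[a][b]
                   for r in self.routes
                   for a, b in zip([0] + r, r + [0]))   # depot is node 0

def random_removal(state, rng):
    # Placeholder destroy operator: the real one removes a subset of customers.
    return state.copy()

def greedy_insertion(state, rng):
    # Placeholder repair operator: the real one re-inserts removed customers feasibly.
    return state.copy()

# Tiny made-up instance reusing the Figure 2 layout (distances and demands are illustrative).
nodes = range(11)                              # depot 0, customers 1-8, SDLs 9-10
dist = {i: {j: abs(i - j) for j in nodes} for i in nodes}
initial = CVRPDOState(routes=[[1, 2, 3, 9], [10, 7, 8]],
                      pickups={9: [4, 5], 10: [6]},
                      demand={i: 1 for i in nodes}, dist=dist)

alns = ALNS()
alns.add_destroy_operator(random_removal)      # the paper uses six destroy operators
alns.add_repair_operator(greedy_insertion)     # and two repair operators

select = RouletteWheel(scores=[25, 5, 1, 0], decay=0.8, num_destroy=1, num_repair=1)
accept = HillClimbing()                        # one of the acceptance criteria in Figure 4
stop = MaxRuntime(5)                           # a few seconds for this toy run

result = alns.iterate(initial, select, accept, stop)
print(result.best_state.objective())
```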

 

 

 

 

The ALNS Framework in Figure 4. with the following text that describes it, should be improved (Section 4.). Where is the local search in Figure 4? How did you exactly implement the roulette wheel selection, on what operators, are they individual or in pairs “Destroy-Repair”, etc? What is the Termination condition?

Figure 4 was improved by adding the acceptance criteria: Hill Climbing (which is a local search) and Record-to-Record Travel. The roulette wheel selection works on choosing the most appropriate combination of operators (pairs of destroy and repair) based on various factors such as problem characteristics, solution quality, and the exploration-exploitation balance.

 

Figure 4. The ALNS Framework for CVRPDO with six destroy operators and two repair operators.

 

 

What is the MIP gap to optimality for large scale instances (1000 customers, Table 4). Detailed results for the cases of 8, 10 and 12 customers should be also presented in the form of table (including the computational times for both approaches). Presentation of the results should include the optimality of solutions obtained from the MIP model (in how many instances was optimality reached, what is the gap to optimality if it was not reached etc.) It seems that there is a large gap in the scale of test instances between 12 and 100 customers. Authors should add at least one more scale of 50 customers test instances, with detailed results.

The MIP gap to optimality is reported for Tables 4 to 10, and the 50 customer instances were added to the results as follows:

    • Comprehensive Evaluation of the ALNS Performance

In this part of the computational study, the performance of the proposed ALNS solution approach is compared with the commercial solver (MIP) applied to the model in Section 3.2. Results are summarized in Tables 8 to 14 for different instance sizes. For the 1000, 800, 600, 400, 200, and 100 customer instances, the MIP model ran with 3-hour runtimes, and with 0.5-hour runtimes for the 50 customer instances. The ALNS runtimes were 30, 60, 90, and 120 seconds for instances with 1000, 800, 600, 400, and 200 customers; 10, 20, 30, and 40 seconds for the 100 customer instances; and 5, 10, 15, and 20 seconds for the 50 customer instances. For each instance, the average of 5 runs is considered. The detailed results for each instance are reported in Appendix B in Tables B.1 to B.6.

Table 8, Table 9, and Table 10 present the results of the 1000, 800, and 600 customer instances. The results reveal a clear superiority of the ALNS algorithm over the MIP model for all instances. In a matter of seconds, the ALNS algorithm outperformed the MIP model, which had taken 3 hours to solve, by achieving significantly better results. For the 1000 customer instances, the gap of the ALNS is 19%, 22%, 24%, and 25% better than the MIP for 30, 60, 90, and 120 second runtimes, respectively. Across the 800 customer instances, the ALNS algorithm demonstrates a remarkable performance advantage over the MIP model, achieving a significant improvement ranging from 35% to 40% within a mere two-minute runtime. Across the 600 customer instances, the utilization of the ALNS algorithm resulted in a noteworthy average improvement ranging from 2% to 7% compared to the MIP model.

Table 8. Comparative Analysis of MIP and ALNS Solution Approaches on 1000 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_10 | 48,058 (78%) | 41,131 (-14%) | 39,268 (-18%) | 38,366 (-20%) | 37,751 (-21%) |
| C2_10 | 47,728 (62%) | 43,114 (-10%) | 41,483 (-13%) | 40,036 (-16%) | 38,968 (-18%) |
| R1_10 | 69,597 (59%) | 49,655 (-29%) | 47,738 (-31%) | 47,321 (-32%) | 47,017 (-32%) |
| R2_10 | 69,010 (59%) | 50,866 (-26%) | 48,730 (-29%) | 47,981 (-31%) | 47,272 (-32%) |
| RC1_10 | 55,882 (59%) | 47,048 (-16%) | 45,671 (-18%) | 44,702 (-20%) | 44,642 (-20%) |
| Avg. | 58,055 (63%) | 46,363 (-19%) | 44,578 (-22%) | 43,681 (-24%) | 43,130 (-25%) |

Table 9. Comparative Analysis of MIP and ALNS Solution Approaches on 800 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_8 | 43,489 (81%) | 27,893 (-36%) | 27,279 (-37%) | 26,891 (-38%) | 26,646 (-39%) |
| C2_8 | 40,022 (63%) | 30,549 (-24%) | 29,330 (-27%) | 28,770 (-28%) | 28,422 (-29%) |
| R1_8 | 49,105 (59%) | 35,861 (-27%) | 34,752 (-29%) | 33,983 (-31%) | 33,490 (-32%) |
| R2_8 | 43,725 (54%) | 35,131 (-20%) | 34,763 (-20%) | 34,042 (-22%) | 33,722 (-23%) |
| RC1_8 | 45,585 (64%) | 35,623 (-22%) | 34,639 (-24%) | 34,139 (-25%) | 33,807 (-26%) |
| Avg. | 44,385 (64%) | 33,012 (-26%) | 32,153 (-28%) | 31,565 (-29%) | 31,217 (-30%) |

 

Table 10. Comparative Analysis of MIP and ALNS Solution Approaches on 600 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_6 | 17,706 (64%) | 18,862 (7%) | 18,452 (4%) | 18,140 (2%) | 17,884 (1%) |
| C2_6 | 21,190 (46%) | 21,491 (1%) | 20,709 (-2%) | 19,935 (-6%) | 19,663 (-7%) |
| R1_6 | 22,780 (43%) | 21,458 (-6%) | 21,072 (-7%) | 20,873 (-8%) | 20,729 (-9%) |
| R2_6 | 23,286 (44%) | 21,733 (-7%) | 21,209 (-9%) | 21,003 (-10%) | 20,796 (-11%) |
| RC1_6 | 20,978 (49%) | 20,325 (-3%) | 19,871 (-5%) | 19,618 (-6%) | 19,392 (-8%) |
| Avg. | 13,441 (49%) | 20,774 (-2%) | 20,263 (-4%) | 19,914 (-6%) | 19,693 (-7%) |

Table 11, Table 12, Table 13, and Table 14 showcase the outcomes obtained from the 400, 200, 100, and 50 customer instances, respectively. In the case of the 400 customer instances, the ALNS algorithm accomplished a solution comparable to that of the MIP model after only 30 seconds of runtime. Furthermore, the results improved to achieve a 5% advantage over the MIP model at the 120-second mark. Across the 200 customer instances, the ALNS algorithm consistently exhibits a notable performance advantage over the MIP model, showcasing average improvements ranging from 3% to 6% within a mere two-minute runtime. Although the ALNS algorithm's solutions on the 100 customer instances are within 2 to 5 percent of the MIP model's, the ALNS runtime remains under one minute, while the MIP model takes approximately three hours to complete. For the 50 customer instances, the ALNS heuristics slightly improved the solutions obtained by the MIP model, by an average of 1%.

Table 11. Comparative Analysis of MIP and ALNS Solution Approaches on 400 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_4 | 12,518 (69%) | 12,662 (1%) | 12,518 (0%) | 12,283 (-2%) | 12,133 (-3%) |
| C2_4 | 12,523 (51%) | 12,697 (1%) | 12,343 (-1%) | 12,252 (-2%) | 12,218 (-2%) |
| R1_4 | 14,414 (45%) | 14,424 (0%) | 13,877 (-4%) | 13,579 (-6%) | 13,524 (-6%) |
| R2_4 | 14,490 (49%) | 14,638 (1%) | 14,105 (-3%) | 13,985 (-3%) | 13,916 (-4%) |
| RC1_4 | 14,034 (55%) | 13,811 (-2%) | 13,501 (-4%) | 13,211 (-6%) | 13,026 (-7%) |
| Avg. | 13,596 (54%) | 13,646 (0%) | 13,269 (-2%) | 13,062 (-4%) | 12,963 (-5%) |

Table 12. Comparative Analysis of MIP and ALNS Solution Approaches on 200 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 60 sec Sol (Gap) | ALNS 90 sec Sol (Gap) | ALNS 120 sec Sol (Gap) |
|---|---|---|---|---|---|
| C1_2 | 6,757 (70%) | 6,834 (1%) | 6,679 (-1%) | 6,555 (-3%) | 6,489 (-4%) |
| C2_2 | 6,998 (57%) | 6,734 (-4%) | 6,667 (-5%) | 6,626 (-5%) | 6,583 (-6%) |
| R1_2 | 7,735 (55%) | 7,054 (-9%) | 6,973 (-10%) | 6,944 (-10%) | 6,917 (-11%) |
| R2_2 | 7,368 (52%) | 7,112 (-3%) | 6,962 (-6%) | 6,911 (-6%) | 6,887 (-7%) |
| RC1_2 | 6,725 (55%) | 6,779 (1%) | 6,617 (-2%) | 6,593 (-2%) | 6,567 (-2%) |
| Avg. | 7,117 (56%) | 6,903 (-3%) | 6,780 (-5%) | 6,726 (-5%) | 6,689 (-6%) |

 

 

 

 

Table 13. Comparative Analysis of MIP and ALNS Solution Approaches on 100 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 10, 20, 30, and 40 Seconds.

| Instance | MIP 3 hrs. Sol (Gap) | ALNS 10 sec Sol (Gap) | ALNS 20 sec Sol (Gap) | ALNS 30 sec Sol (Gap) | ALNS 40 sec Sol (Gap) |
|---|---|---|---|---|---|
| C101 | 2,390 (63%) | 2,466 (3%) | 2,405 (1%) | 2,403 (1%) | 2,403 (1%) |
| C201 | 2,615 (55%) | 2,810 (7%) | 2,757 (5%) | 2,720 (4%) | 2,714 (4%) |
| R101 | 2,274 (37%) | 2,349 (3%) | 2,320 (2%) | 2,284 (0%) | 2,284 (0%) |
| R201 | 2,192 (37%) | 2,204 (1%) | 2,182 (0%) | 2,173 (-1%) | 2,167 (-1%) |
| Rc101 | 2,904 (53%) | 3,150 (8%) | 3,090 (6%) | 3,054 (5%) | 3,021 (4%) |
| Avg. | 2,475 (49%) | 2,596 (5%) | 2,551 (3%) | 2,527 (2%) | 2,518 (2%) |

To conclude, the results of objective function values were noticeably improved when solving the problem by ALNS for a 120-second runtime compared with the MIP model (with a runtime of 3 hours) by 25%, 30%, 7%, 5%, and 6% for 1000, 800, 600, 400, and 200 customers, respectively.

Table 14. Comparative Analysis of MIP and ALNS Solution Approaches on 50 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 0.5 Hour of Runtime) at ALNS Runtimes of 5, 10, 15, and 20 Seconds.

| Instance | MIP 0.5 hr. Sol (Gap) | ALNS 5 sec Sol (Gap) | ALNS 10 sec Sol (Gap) | ALNS 15 sec Sol (Gap) | ALNS 20 sec Sol (Gap) |
|---|---|---|---|---|---|
| r50_5_1 | 180 (29%) | 178 (-1%) | 177 (-2%) | 177 (-2%) | 177 (-2%) |
| r50_5_2 | 185 (31%) | 185 (0%) | 185 (0%) | 185 (0%) | 184 (0%) |
| r50_5_3 | 188 (34%) | 189 (0%) | 188 (0%) | 188 (0%) | 188 (0%) |
| r50_5_4 | 187 (36%) | 184 (-2%) | 184 (-2%) | 183 (-2%) | 183 (-2%) |
| r50_5_5 | 197 (37%) | 195 (-1%) | 195 (-1%) | 193 (-2%) | 193 (-2%) |
| Avg. | 187 (33%) | 186 (-1%) | 186 (-1%) | 185 (-1%) | 185 (-1%) |

 

 

 

Detailed results for the cases of 8, 10, and 12 customers were presented in the form of a table as follows:

  • Validation of the ALNS

To validate the ALNS, small instances with 8, 10, and 12 customers were solved by the MIP model and the ALNS algorithm. The ALNS was iterated 20 times. Figure 7 presents an example of the solution obtained by the two methods for an instance with ten customers and two SDLs. In Table 4, the ALNS obtained the optimum objective function value in all runs of the 8 customer instance, and 18 and 17 times out of 20 when the number of customers was 10 and 12, respectively. Although in a few iterations for 10 and 12 customers the ALNS did not reach the optimum solution, the solutions obtained were of very high quality, differing, in each case, from the optimal solution in the assignment of a single customer.

Table 4: Comparison of the ALNS and MIP solution approaches applied to 8, 10, and 12 customers.

| Number of Customers | Radar Chart of 20 Iterations | ALNS Run Times | MIP Run Times | Objective Function Values |
|---|---|---|---|---|
| 8 | 0.2% | 10^-5 seconds | 0.4 seconds | 429 |
| 10 |  | 0.02 seconds | 0.7 seconds | 490 |
| 12 |  | 0.9 seconds | 14 seconds | 578 |

 

 

Author Response File: Author Response.pdf

Reviewer 3 Report

The research studied a Capacitated Vehicle Routing Problem with Parcel Lockers using an adaptive large neighborhood search metaheuristic. My comments are as follows:

First, the author needs to provide a table summarizing the previous studies that have been published.

Second, the contribution and research gap of the study is not clear to me in general. The author should address it as well.

Third, equation 8 is a non-linear term. I was wondering how the author resolved that. I think the author should explain it in detail.

 

The author needs to polish the research study before being published.

Author Response

Thank you for your considerate evaluation of our work. Your feedback holds immense value to us. Here's our outlined strategy for integrating your feedback:

First, the author needs to provide a table summarizing the previous studies that have been published.

The table was added to the paper as follows:

Table 1 summarizes various literature papers with different solution approaches, the maximum number of customers, and the average runtimes in each paper. Besides, the last column mentions whether the parcel size of the customer demand is considered or not.

Table 1. Classification of Reviewed Papers.

| Papers | Problem Variants | Solution Approach | Instance Size | Avg. Runtimes in Sec. | Parcel Size |
|---|---|---|---|---|---|
| Doerner et al. [16] | VRPMTW | Branch and Cut and Price | 50 | 3,600 | ? |
| Favaretto et al. [17] | VRPMTW | Ant Colony | 71 | 250 | ? |
| Belhaiza et al. [18] | VRPMTW | Hybrid Variable Neighborhood Tabu Search | 200 | 90 | ? |
| G. Ghiani et al. [21] | VRPRDL | Variable Neighborhood Search | 120 | ----- | ? |
| Ozbaygin et al. [22] | VRPRDL | Branch and Price | 120 | 720,000 | ? |
| Ozbaygin et al. [23] | VRPRDL | Branch and Price | 60 | 68 | ? |
| Baldacci et al. [24] | VRPTF | Constructive heuristic / Lagrangian heuristic | 150 | 229 | ? |
| Alcaraz et al. [25] | VRPDO | Mixed Integer Programming | 20 | 1,800 | ? |
| Dumez et al. [12] | VRPDO | Large Neighborhood Search | 400 | 2,000 | ? |
| Mancini et al. [2] | VRPDO | Large Neighborhood Search / Iterated Local Search | 75 | 283 / 772 | ? |
| This paper |  | Adaptive Large Neighborhood Search | 1000 | 120 | ? |

 

 

 

Second, the contribution and research gap of the study is not clear to me in general. The author should address it as well.

The research contribution has been rewritten to be clearer as the following: “This paper presents various noteworthy contributions. First, it addresses a gap within the current literature since existing studies on the vehicle routing problem in the context of delivery options often either disregard vehicle capacity constraints entirely or overly simplify them, reducing them to the maximal count of customers that a vehicle can accommodate. In contrast, this paper introduces a more nuanced treatment of vehicle capacity constraints, encompassing the customer’s request size, quantified by the number of units (demand size) within each request. Moreover, the paper incorporates consideration of SDL capacity, denoting the count of utilized lockers within a SDL, allowing for synchronization among SDLs capacity and vehicle capacity. Besides, the paper provides an efficient implementation of the Adaptive Large Neighborhood Search (ALNS) which was tailored to solve the problem in question. An innovative solution representation is introduced. Furthermore, several repair and destroy operators have been incorporated into the framework's implementation. While most existing literature primarily focuses on problem instances with up to 400 customers, this paper extends the scope by considering instances with up to 1000 customers.”.

 

 

Third, equation 8 is a non-linear term. I was wondering how the author resolved that. I think the author should explain it in detail.

I have carefully reconsidered the equation and rephrased it to enhance its clarity as a linear equation. I hope the revised presentation now effectively conveys its linear nature:

 

 

[Revised linear constraints (8), (9), and (10); see the attached author response file.]

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The paper can be accepted.

Author Response

The authors extend their thanks for your review. Your insightful feedback has been invaluable in refining the quality of the work. Your expertise in the field has provided us with a fresh perspective and enabled us to address various aspects of the paper effectively. Thank you once again for your time and effort. Your feedback has played a crucial role in shaping the final version of the paper. Here's our outlined strategy for integrating your feedback:

Regarding the authors’ response: “In this paper, the authors have introduced a novel representation paradigm for addressing the vehicle routing problem in the context of delivery options. This representation framework is intentionally agnostic to any particular solution methodology, embodying a generalizable approach. Consequently, our attention is directed towards the identification of inaccuracies or errors intrinsic to this representation”. Routes’ representation is not problem description.

The section of Routes’ representation has been restructured to function as a discrete subsection within the ALNS heuristic approach, as delineated below:

 

Also, the authors did not answer the second part of my comment: "Also, why do you focus on only two erroneous cases in line 197 (I assume that there are more of these invalid solutions that should be discarded in the search procedures, e.g., one node in two routes or at two positions in a single route etc)."

The Figure 3 illustration has been revised for enhanced clarity as follows:

In any variant of the VRP, solution representation is crucial to the success of search procedures and algorithms. Incorrect or invalid representations can lead to errors, inefficiencies, and incorrect results. Several types of errors must be avoided in the solution representation, such as overlapping routes, missing visits, duplicate visits, and ordering violations, among others. Such types of errors in the representation are not unique to the CVRPDO. They are rather global to all variants of the VRP, which makes checking for these errors a matter of routine in any such implementation. While the proposed implementation is designed to avoid all types of erroneous representations, in writing this paper the choice was made to focus solely on detailing the errors uniquely associated with the proposed new solution representation of the CVRPDO, to alert to their presence. Consequently, Figure 3 is deployed as an illustrative paradigm, elucidating the instantiation of two erroneous solutions denoted as S1 and S3, juxtaposed with their respective rectifications embodied in solutions S2 and S4. Solution S1 is considered incorrect as it includes empty routes, which must be removed from the solution set. Similarly, solution S3 is deemed erroneous since SDL (10) remains unvisited by any customers. Consequently, SDL (10) should not be considered in any route within the solution.

 

Regarding the authors' response to the questions of the proposed research's originality, as well as the scientific novelty and contribution, I must disagree with it. Especially regarding the heuristic approach, where the authors claim to introduce "an innovative solution representation suitable to the problem in question". There is nothing innovative in the solution representation, at least I cannot see that. Please explain what is so innovative in the proposed solution representation, and the custom repair and destroy operators?

The solution representation section has been rewritten to be clearer as the following:

    • Solution Representation

The solution representation is a list of lists, as shown in Figure 1. These lists consist of two distinct groups: the first group of lists represents tours which contains home delivery customers and shared locations which have been assigned customers. The routes of these tours can be introduced by adding the depot at both the beginning and end of each tour. The second group of these lists illustrates details the customers who will pick up their parcels from the SDLs that have appeared in the first section of the solution representation. For example, in Figure 1,there are two routes: R1 (0, 1, 2, 3, 9, 0) and R2 (0, 10, 7, 8, 0). In this scenario, customers (4, 5) are assigned to pick up their requests from the SDL (9), whereas customer (6) is responsible for picking up their parcels from the SDL (10).

To the best of the author's knowledge, this compact representation approach has yet to be expounded upon in the literature. Indeed, this representation deals with the second group, ShD and pickup customers, as routes with special characteristics. Consequently, the VRPDO can be considered the same way as the classic VRP. This novel approach has enabled the solution of problem instances encompassing up to 1,000 customers, a notable advancement beyond what is found in the literature, in which, problem instances featuring only up to 400 customers have been solved.

 

Figure 1: Graphical representation of a solution (S)

With respect to custom repair and destroy operators, although the fundamental framework and aforementioned operators have been introduced within existing literature, the authors have tailored and employed these operators and the framework to address the particular problem in question. As a result, these elements are not delineated as independent contributions; rather, they constitute integral facets of the research undertaken within this paper.

Regarding the authors’ answer: “Roulette wheel operators work on selecting the most appropriate combination of operators (pairs of destroy and repair) based on various factors such as problem characteristics, solution quality, and exploration-exploitation balance.” The roulette wheel selection is of great importance to the proposed approach, and the authors’ provided a vague and inadequate explanation “based on various factors”. Please provide the exact calculations used in the roulette wheel.

This section is rewritten with more details about the roulette wheel framework as the following:

    • Operator Selection Scheme

The Operator Selection Scheme refers to a strategy or mechanism used in metaheuristic algorithms to determine which operators to apply at each iteration or stage of the search process. It involves selecting the most appropriate combination of operators based on various factors such as problem characteristics, solution quality, and exploration-exploitation balance. In this paper, the implemented selection scheme was the roulette wheel selection. It is a probabilistic method for selecting operators based on their fitness or performance metrics.

The Roulette wheel scheme updates operator weights as a convex combination of the current weight, and the new score. When the algorithm starts, all operators  are assigned weight . In each iteration, a destroy and repair operator  and  are selected by the ALNS algorithm, based on the current weights . These operators are applied to the current solution, resulting in a new candidate solution. This candidate is evaluated by the ALNS algorithm, which leads to one of four outcomes:

  • The candidate solution is a new global best.
  • The candidate solution is better than the current solution, but not a global best.
  • The candidate solution is accepted.
  • The candidate solution is rejected.

Each of these four outcomes is assigned a score  (with  ). After observing outcome , the weights of the destroy and repair operator  and  that were applied are updated as follows:

 

where  (known as the operator decay rate) is a parameter.

 

 

 

 

 

 

 

 

Regarding the Gap column for MIP model in Tables 9-14, what does this Gap stands for? Please, give the explanation in the submission.

The illustration of the gaps mentioned in the text are rewritten as the following:

    • Testing the performance of the ALNS aginst Extended MIP Runs

To prove the ALNS algorithm quality, large instances of (1000 customers) were solved by the MIP model with five hours runtimes and compared with the heuristic solutions at 2, 5, and 10 minutes (average of 5 runs are considered). Gaps between the two solutions’ methods were reported in Error! Reference source not found.. The detailed results for each instance are reported in Error! Reference source not found. in Appendix A. Although it is not practical to run the MIP model for this considerable time, and it will occupy a massive memory of the RAM, this is done to validate the algorithm's performance.

Each line in the tables corresponds to the average result for one type of instance. Instance groups are listed in column 1. Columns 2, 3 encapsulate the results pertaining to the solution derived from the MIP model and the optimality gap (BKB Gap). Moreover, the fourth, sixth, eighth, and tenth columns provide the average cost associated with the ALNS across varying runtimes. Furthermore, the fifth, seventh, ninth, and eleventh columns present the average gap between the ALNS algorithm's outcomes and those provided by the MIP model's solutions (BKS Gap). Indeed, negative BKS gap means the ALNS solution is better than the one obtained by MIP the model.

Table 1. Comparison of the MIP and ALNS solution approaches applied to 1000 customer instances. For MIP, the found solutions after five hours of runtime were reported. For ALNS, the found solutions after (2,5,10 minutes) and the percentage gap to the best solution found by MIP at these runtimes were reported.

Instance

MIP

5 hrs.

 

 

ALNS

1 min.

 

ALNS

2 min.

 

ALNS

5 min.

 

ALNS

10 mins.

 

 

 

 

 

Sol

BKB* Gap

 

Sol

BKS** Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

C1_10

37,811

72%

 

39,268

4%

 

37,751

0%

 

36,287

-4%

 

35,624

-6%

C2_10

39,753

54%

 

41,483

4%

 

38,968

-2%

 

36,912

-7%

 

35,648

-10%

R1_10

54,370

47%

 

47,738

-12%

 

47,017

-14%

 

45,731

-16%

 

44,526

-18%

R2_10

52,413

45%

 

48,730

-7%

 

47,272

-10%

 

45,443

-13%

 

44,340

-15%

RC1_10

53,400

57%

 

45,671

-14%

 

44,642

-16%

 

43,020

-19%

 

42,127

-21%

Avg.

47,549

55%

 

44,578

-5%

 

43,130

-8%

 

41,478

-12%

 

40,453

-14%

BKB*

BKS**

Best Known Bound Obtained by the 3-hour run of the MIP.

Best Known Solutions Obtained by the 3-hour run of the MIP.

                               

 

 

 

Please give a comment of the average Gap for MIP across different size of instances. It seems that these results are not stable (with 3 hrs, for instances with 1000, 800, 600, 400, 200, and 100 customers, the MIP model respectively obtained average Gaps of 63%, 64%, 49%, 54%, 56%, 49%). Also, the same inconsistency of results is reported for average ALNS (120 sec) Gap (-25%, -30%, -7%, -5%, -6%, 2%). Why is this happening? Did you test the sensitivity of ALNS regarding the total number of iterations for solving each instance?

The number of customers is not the only factor that defines the complexity of the problem. As a result,
We might find that instances with a higher number of customers achieve better results (smaller gap to optimality) than instances with a lower number. This phenomenon can occur due to various factors inherent in the MIP optimization process or in heuristic solution and the characteristics of the problem instances.

Here are a few reasons why this might happen:

Problem Complexity: The complexity of the VRP can vary greatly depending on factors like the number of nodes, vehicles, and constraints. Smaller instances might have specific characteristics that make them harder to solve optimally within a limited runtime.

Solution Space: The structure of the solution space can differ between smaller and larger instances. It's possible that certain instance characteristics make it easier for the solver to find good solutions quickly in larger instances.

Heuristics and Relaxations: MIP solvers often use heuristics and relaxation techniques to efficiently search for solutions. These techniques might behave differently for smaller and larger instances, leading to unexpected gaps.

Initial Solution Quality: The quality of the initial solution provided to the solver can impact the solver's performance. Larger instances might benefit from a better initial solution, which can help guide the solver towards better solutions.

 

 

To illustrate this part in the paper, this comment is added in the conclusion:

From the results it is found that the instance size is not the only factor that defines the complexity of the problem. As a result, instances with a higher number of customers may achieve better results (smaller gap to optimality) than instances with a lower number. Other factors may play a role in increasing the complexity of the problem such as the structure of the solution space, heuristics and relaxation techniques that is used by the solver to efficiently search for solutions, and the quality of the initial solution.

 

 

 

In this paper: “Maximum coverage capacitated facility location problem with range constrained drones”, which published in 2019 in Transportation Research Part C, the results show that there is no direct relation between the instance size and the gap to optimality. Also, there is no direct relation between the instance size and the runtimes, as shown in the following figures.

 

 

 

Why is the ALNS performing badly for the case of 100 customers (why did you decrease the ALNS available computational time to 40 sec, while you kept 3 hrs. for MIP model)?

And why did you decrease available computational time to 0.5 hrs for 50 Customer Instances?

The authors acknowledge your perspective. Changing the computational times may be confusing. The tables are represented with 3 hours MIP runtimes and (30, 60, 90, and 120) seconds ALNS runtimes. For the case with 50 customers, calculations were done twice: once for 3 hours using MIP, and once for 0.5 hours. This repetition helps illustrate that even though the heuristic method is not as good as the 3-hour MIP model, it still performs better than when the model is run for only 0.5 hour.

Table 3 presents the results of the 50 customer instances with MIP runtimes 3 hrs., while in Table 4 the runtimes is 0.5 hour for the same instances. Across the 50 customer instances, the ALNS heuristics demonstrate an average deterioration of 10% in comparison to the MIP model with 3 hours runtimes. For 0.5 hr. MIP runtimes, the ALNS heuristics slightly improved the solutions obtained by the MIP model by an average of 1% within less than two minutes.

Table 2. Comparative Analysis of MIP and ALNS Solution Approaches on 100 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

Instance

MIP

3 hrs.

 

 

ALNS

30 sec

 

ALNS

60 sec

 

ALNS

90 sec

 

ALNS

120 sec

 

 

 

 

 

Sol

BKB Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

C101

2,390

63%

 

2403

1%

 

2403

1%

 

2389

0%

 

2389

0%

C201

2,615

55%

 

2720

4%

 

2717

4%

 

2637

1%

 

2636

1%

R101

2,274

37%

 

2284

0%

 

2284

0%

 

2245

-1%

 

2183

-4%

R201

2,192

37%

 

2173

-1%

 

2165

-1%

 

2165

-1%

 

2106

-4%

Rc101

2,904

53%

 

3054

5%

 

3045

5%

 

2907

0%

 

2907

0%

Avg.

2,475

49%

 

2527

2%

 

2523

2%

 

2469

0%

 

2444

-1%

Table 3. Comparative Analysis of MIP and ALNS Solution Approaches on 50 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

Instance

MIP

3 hrs.

 

ALNS

30 sec

 

ALNS

60 sec

 

ALNS

90 sec

 

ALNS

120 sec

 

 

 

 

Sol

BKB Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

r50_5_1

168

28%

 

177

5%

 

177

5%

 

177

5%

 

177

5%

r50_5_2

139

30%

 

185

33%

 

184

32%

 

184

32%

 

184

32%

r50_5_3

150

31%

 

188

25%

 

188

25%

 

188

25%

 

188

25%

r50_5_4

196

34%

 

194

-1%

 

184

-6%

 

184

-6%

 

183

-7%

r50_5_5

184

33%

 

185

1%

 

185

1%

 

174

-5%

 

174

-5%

Avg.

167

31%

 

186

13%

 

184

12%

 

181

10%

 

181

10%

Table 4Comparative Analysis of MIP and ALNS Solution Approaches on 50 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 0.5 hour of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

Instance

MIP

0.5 hr.

 

ALNS

30 sec

 

ALNS

60 sec

 

ALNS

90 sec

 

ALNS

120 sec

 

 

 

 

Sol

BKB Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

 

Sol

BKS Gap

r50_5_1

180

29%

 

177

-2%

 

177

-2%

 

177

-2%

 

177

-2%

r50_5_2

185

31%

 

185

0%

 

184

-1%

 

184

-1%

 

184

-1%

r50_5_3

188

34%

 

188

0%

 

188

0%

 

188

0%

 

188

0%

r50_5_4

187

36%

 

194

4%

 

184

-2%

 

184

-2%

 

183

-2%

r50_5_5

197

37%

 

185

-6%

 

185

-6%

 

174

-12%

 

174

-12%

Avg.

187

33%

 

186

-1%

 

184

-2%

 

181

-3%

 

181

-3%

 

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors did improve certain segments of the submission, but there are still some important issues mainly regarding the scientific contribution of the presented models and research, as well as in the results section. The detailed remarks are given in the following. I hope that these comments can help authors to improve their research.

Regarding the authors’ response: “In this paper, the authors have introduced a novel representation paradigm for addressing the vehicle routing problem in the context of delivery options. This representation framework is intentionally agnostic to any particular solution methodology, embodying a generalizable approach. Consequently, our attention is directed towards the identification of inaccuracies or errors intrinsic to this representation”. Routes’ representation is not problem description. Therefore, this response is inadequate. Also, authors did not answer to the second part of my comment: “Also, why do you focus on only two erroneous cases in line 197 (I assume that there are more of these invalid solutions that should be discarded in the search procedures, e.g., one node in two routes or at two positions in a single route etc).”

Regarding the authors’ response to the questions of proposed research originality, as well as the scientific novelty and contribution, I must disagree with it. Especially regarding the heuristic approach where authors’ claim to introduce “an innovative solution representation suitable to the problem in question”. There is nothing innovative in the solution representation, at least I cannot se that. Please explain what is so innovative in the proposed solution representation, and custom repair and destroy operators?

Regarding the authors’ answer: “Roulette wheel operators work on selecting the most appropriate combination of operators (pairs of destroy and repair) based on various factors such as problem characteristics, solution quality, and exploration-exploitation balance.” The roulette wheel selection is of great importance to the proposed approach, and the authors’ provided a vague and inadequate explanation “based on various factors”. Please provide the exact calculations used in the roulette wheel.

Regarding the Gap column for MIP model in Tables 9-14, what does this Gap stands for? Please, give the explanation in the submission. Please give a comment of the average Gap for MIP across different size of instances. It seems that these results are not stable (with 3 hrs, for instances with 1000, 800, 600, 400, 200, and 100 customers, the MIP model respectively obtained average Gaps of 63%, 64%, 49%, 54%, 56%, 49%). Also, the same inconsistency of results is reported for average ALNS (120 sec) Gap (-25%, -30%, -7%, -5%, -6%, 2%). Why is this happening? Did you test the sensitivity of ALNS regarding the total number of iterations for solving each instance? Why is the ALNS performing badly for the case of 100 customers (why did you decrease the ALNS available computational time to 40 sec, while you kept 3 hrs. for MIP model)? And why did you decrease available computational time to 0.5 hrs for 50 Customer Instances? There is lot of “inconsistencies” in the Results segment of the submission.

Author Response

The authors extend their thanks for your review. Your insightful feedback has been invaluable in refining the quality of the work. Your expertise in the field has provided us with a fresh perspective and enabled us to address various aspects of the paper effectively. Thank you once again for your time and effort. Your feedback has played a crucial role in shaping the final version of the paper. Here's our outlined strategy for integrating your feedback:

Regarding the authors’ response: “In this paper, the authors have introduced a novel representation paradigm for addressing the vehicle routing problem in the context of delivery options. This representation framework is intentionally agnostic to any particular solution methodology, embodying a generalizable approach. Consequently, our attention is directed towards the identification of inaccuracies or errors intrinsic to this representation”. Routes’ representation is not problem description.

The section of Routes’ representation has been restructured to function as a discrete subsection within the ALNS heuristic approach, as delineated below:

 

Also, authors did not answer to the second part of my comment: “Also, why do you focus on only two erroneous cases in line 197 (I assume that there are more of these invalid solutions that should be discarded in the search procedures, e.g., one node in two routes or at two positions in a single route etc).”

The illustration of Figure 3 has been revised for enhanced clarity, as follows:

In any variant of the VRP, solution representation is crucial to the success of search procedures and algorithms. Incorrect or invalid representations can lead to errors, inefficiencies, and incorrect results. Several types of errors must be avoided in the solution representation, such as overlapping routes, missing visits, duplicate visits, and ordering violations, among others. These types of representation errors are not unique to the CVRPDO; they are common to all variants of the VRP, which makes checking for them a matter of routine in any such implementation. While the proposed implementation is designed to avoid all types of erroneous representations, this paper focuses solely on detailing the errors uniquely associated with the proposed new solution representation of the CVRPDO, to alert readers to their presence. Consequently, Figure 3 is used as an illustrative example, showing two erroneous solutions, denoted S1 and S3, together with their respective corrections, solutions S2 and S4. Solution S1 is considered incorrect as it includes empty routes, which must be removed from the solution set. Similarly, solution S3 is deemed erroneous since SDL (10) remains unvisited by any customers; consequently, SDL (10) should not appear in any route within the solution.

 

Regarding the authors’ response to the questions of proposed research originality, as well as the scientific novelty and contribution, I must disagree with it. Especially regarding the heuristic approach where authors’ claim to introduce “an innovative solution representation suitable to the problem in question”. There is nothing innovative in the solution representation, at least I cannot se that. Please explain what is so innovative in the proposed solution representation, and custom repair and destroy operators?

The solution representation section has been rewritten to be clearer, as follows:

    • Solution Representation

The solution representation is a list of lists, as shown in Figure 1. These lists consist of two distinct groups: the first group of lists represents tours, which contain home-delivery customers and shared locations to which customers have been assigned. The routes of these tours are obtained by adding the depot at both the beginning and the end of each tour. The second group of lists details the customers who will pick up their parcels from the SDLs that appear in the first group of the solution representation. For example, in Figure 1, there are two routes: R1 (0, 1, 2, 3, 9, 0) and R2 (0, 10, 7, 8, 0). In this scenario, customers (4, 5) are assigned to pick up their requests from SDL (9), whereas customer (6) picks up their parcels from SDL (10).

To the best of the authors' knowledge, this compact representation approach has not yet been described in the literature. Indeed, this representation treats the second group, the ShD and pickup customers, as routes with special characteristics. Consequently, the VRPDO can be handled in the same way as the classic VRP. This novel approach has enabled the solution of problem instances with up to 1,000 customers, a notable advancement beyond what is found in the literature, in which problem instances with only up to 400 customers have been solved.

 

Figure 1: Graphical representation of a solution (S)
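To make the list-of-lists representation concrete, a minimal Python sketch is given below. It is an illustrative reconstruction rather than the authors' implementation; the helper names build_routes and is_valid_representation and the dictionary sdl_assignments are assumptions. It encodes the example of Figure 1 and checks for the two representation-specific errors discussed above, namely empty routes and SDLs that appear in a route without any assigned pickup customers:

```python
# Minimal sketch of the list-of-lists solution representation (illustrative only).
# Group 1: tours over home-delivery customers and used SDLs (depot 0 added at both ends).
# Group 2: for each used SDL, the pickup customers assigned to it.

def build_routes(tours, depot=0):
    """Add the depot at the beginning and end of each tour to obtain routes."""
    return [[depot] + tour + [depot] for tour in tours]

def is_valid_representation(routes, sdl_assignments, depot=0):
    """Check the two representation-specific errors discussed above:
    (1) empty routes, and (2) SDLs that are visited but serve no pickup customer."""
    for route in routes:
        if len(route) <= 2:                        # only the depot at both ends: error in S1
            return False
    visited = {node for route in routes for node in route if node != depot}
    for sdl, customers in sdl_assignments.items():
        if sdl in visited and not customers:       # SDL visited but unused: error in S3
            return False
        if customers and sdl not in visited:       # customers assigned to an unvisited SDL
            return False
    return True

# Example of Figure 1: two routes, with SDLs 9 and 10 serving pickup customers.
tours = [[1, 2, 3, 9], [10, 7, 8]]
routes = build_routes(tours)                       # [[0, 1, 2, 3, 9, 0], [0, 10, 7, 8, 0]]
sdl_assignments = {9: [4, 5], 10: [6]}             # pickup customers per SDL
print(routes, is_valid_representation(routes, sdl_assignments))
```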

With respect to custom repair and destroy operators, although the fundamental framework and aforementioned operators have been introduced within existing literature, the authors have tailored and employed these operators and the framework to address the particular problem in question. As a result, these elements are not delineated as independent contributions; rather, they constitute integral facets of the research undertaken within this paper.
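Since the tailored operators themselves are not listed in this response, the following minimal Python sketch only illustrates how a generic destroy/repair pair from the ALNS literature (random removal and greedy cheapest insertion) acts on routes of this form. The function names, the Euclidean cost function, and the omission of capacity and delivery-option checks are simplifying assumptions and do not reflect the operators used in the paper:

```python
import math
import random

def cost(a, b, coords):
    """Euclidean travel cost between two nodes (an assumed cost structure)."""
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def random_removal(routes, n_remove, rng, depot=0):
    """Destroy operator: remove n_remove randomly chosen non-depot visits."""
    positions = [(i, j) for i, route in enumerate(routes)
                 for j, node in enumerate(route) if node != depot]
    removed = []
    # Pop higher positions first within each route so earlier indices stay valid.
    for i, j in sorted(rng.sample(positions, n_remove), key=lambda p: (p[0], -p[1])):
        removed.append(routes[i].pop(j))
    routes = [route for route in routes if len(route) > 2]   # drop routes left empty
    return routes, removed

def greedy_insertion(routes, removed, coords):
    """Repair operator: reinsert each removed node at its cheapest position
    (capacity and delivery-option checks are omitted in this sketch)."""
    for node in removed:
        best = None
        for i, route in enumerate(routes):
            for j in range(1, len(route)):
                delta = (cost(route[j - 1], node, coords) + cost(node, route[j], coords)
                         - cost(route[j - 1], route[j], coords))
                if best is None or delta < best[0]:
                    best = (delta, i, j)
        routes[best[1]].insert(best[2], node)
    return routes

# Tiny usage example with assumed coordinates; node 0 is the depot.
coords = {0: (0, 0), 1: (1, 2), 2: (3, 1), 3: (4, 4), 7: (-2, 1), 8: (-3, 3)}
routes = [[0, 1, 2, 3, 0], [0, 7, 8, 0]]
rng = random.Random(42)
routes, removed = random_removal(routes, 2, rng)
print(greedy_insertion(routes, removed, coords))
```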

Regarding the authors’ answer: “Roulette wheel operators work on selecting the most appropriate combination of operators (pairs of destroy and repair) based on various factors such as problem characteristics, solution quality, and exploration-exploitation balance.” The roulette wheel selection is of great importance to the proposed approach, and the authors’ provided a vague and inadequate explanation “based on various factors”. Please provide the exact calculations used in the roulette wheel.

This section has been rewritten with more details about the roulette wheel framework, as follows:

    • Operator Selection Scheme

The Operator Selection Scheme refers to a strategy or mechanism used in metaheuristic algorithms to determine which operators to apply at each iteration or stage of the search process. It involves selecting the most appropriate combination of operators based on various factors such as problem characteristics, solution quality, and exploration-exploitation balance. In this paper, the implemented selection scheme was the roulette wheel selection. It is a probabilistic method for selecting operators based on their fitness or performance metrics.

The Roulette Wheel scheme updates operator weights as a convex combination of the current weight and the new score. When the algorithm starts, all operators $i$ are assigned weight $\omega_i = 1$. In each iteration, a destroy operator $d$ and a repair operator $r$ are selected by the ALNS algorithm based on the current weights $\omega_i$. These operators are applied to the current solution, resulting in a new candidate solution. This candidate is evaluated by the ALNS algorithm, which leads to one of four outcomes:

  • The candidate solution is a new global best.
  • The candidate solution is better than the current solution, but not a global best.
  • The candidate solution is accepted.
  • The candidate solution is rejected.

Each of these four outcomes $j$ is assigned a score $s_j$ (with $s_1 \ge s_2 \ge s_3 \ge s_4 \ge 0$). After observing outcome $j$, the weights of the destroy and repair operators $d$ and $r$ that were applied are updated as follows:

$$\omega_d \leftarrow \theta\,\omega_d + (1-\theta)\,s_j, \qquad \omega_r \leftarrow \theta\,\omega_r + (1-\theta)\,s_j,$$

where $\theta \in [0, 1]$ (known as the operator decay rate) is a parameter.
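A minimal Python sketch of this selection-and-update cycle is shown below. It is illustrative only; the number of operators, the scores for the four outcomes, and the decay rate are assumed values rather than the parameters tuned in the paper:

```python
import random

def update_weight(weights, op_index, outcome, scores, theta):
    """Roulette-wheel update: convex combination of the current weight and the new score."""
    weights[op_index] = theta * weights[op_index] + (1 - theta) * scores[outcome]

def select_operator(weights, rng):
    """Select an operator index with probability proportional to its weight."""
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

# Assumed settings: three destroy and two repair operators, all starting at weight 1.
destroy_w, repair_w = [1.0, 1.0, 1.0], [1.0, 1.0]
scores = [25, 15, 5, 0]     # scores s_1..s_4 for the four outcomes (assumed values)
theta = 0.8                 # operator decay rate (assumed value)
rng = random.Random(0)

d = select_operator(destroy_w, rng)
r = select_operator(repair_w, rng)
outcome = 1                 # e.g., candidate better than the current solution, not a global best
update_weight(destroy_w, d, outcome, scores, theta)
update_weight(repair_w, r, outcome, scores, theta)
print(destroy_w, repair_w)
```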

 

 

 

 

 

 

 

 

Regarding the Gap column for MIP model in Tables 9-14, what does this Gap stands for? Please, give the explanation in the submission.

The explanation of the gaps mentioned in the text has been rewritten as follows:

    • Testing the Performance of the ALNS against Extended MIP Runs

To demonstrate the quality of the ALNS algorithm, large instances (1000 customers) were solved by the MIP model with a five-hour runtime and compared with the heuristic solutions at 1, 2, 5, and 10 minutes (the average of five runs is considered). Gaps between the two solution methods are reported in Table 1. The detailed results for each instance are reported in Appendix A. Although it is not practical to run the MIP model for such a long time, and doing so occupies a massive amount of RAM, this was done to validate the algorithm's performance.

Each line in the tables corresponds to the average result for one type of instance. Instance groups are listed in column 1. Columns 2 and 3 contain the solution obtained by the MIP model and its optimality gap (BKB Gap). The fourth, sixth, eighth, and tenth columns provide the average cost obtained by the ALNS at the various runtimes, while the fifth, seventh, ninth, and eleventh columns present the average gap between the ALNS outcomes and the MIP model's solutions (BKS Gap). A negative BKS gap means that the ALNS solution is better than the one obtained by the MIP model.
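For completeness, the reported percentages are consistent with the standard gap definitions below (stated here as an assumed reconstruction, since the exact formulas appear in the paper rather than in this response):

$$\text{BKS Gap} = \frac{z_{\text{ALNS}} - z_{\text{MIP}}}{z_{\text{MIP}}} \times 100\%, \qquad \text{BKB Gap} = \frac{z_{\text{MIP}} - \underline{z}_{\text{MIP}}}{z_{\text{MIP}}} \times 100\%,$$

where $z_{\text{ALNS}}$ denotes the ALNS objective value, $z_{\text{MIP}}$ the incumbent (best known solution) found by the MIP run, and $\underline{z}_{\text{MIP}}$ the best known bound reported by the solver. For example, for C1_10 at 1 minute, $(39{,}268 - 37{,}811)/37{,}811 \approx 4\%$, matching the table.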

Table 1. Comparison of the MIP and ALNS solution approaches applied to 1000-customer instances. For MIP, the solutions found after five hours of runtime are reported. For ALNS, the solutions found after 1, 2, 5, and 10 minutes and the percentage gap to the best solution found by MIP at these runtimes are reported.

Instance | MIP 5 hrs. Sol | BKB* Gap | ALNS 1 min. Sol | BKS** Gap | ALNS 2 min. Sol | BKS Gap | ALNS 5 min. Sol | BKS Gap | ALNS 10 min. Sol | BKS Gap
C1_10    | 37,811 | 72% | 39,268 |   4% | 37,751 |   0% | 36,287 |  -4% | 35,624 |  -6%
C2_10    | 39,753 | 54% | 41,483 |   4% | 38,968 |  -2% | 36,912 |  -7% | 35,648 | -10%
R1_10    | 54,370 | 47% | 47,738 | -12% | 47,017 | -14% | 45,731 | -16% | 44,526 | -18%
R2_10    | 52,413 | 45% | 48,730 |  -7% | 47,272 | -10% | 45,443 | -13% | 44,340 | -15%
RC1_10   | 53,400 | 57% | 45,671 | -14% | 44,642 | -16% | 43,020 | -19% | 42,127 | -21%
Avg.     | 47,549 | 55% | 44,578 |  -5% | 43,130 |  -8% | 41,478 | -12% | 40,453 | -14%

BKB*: Best Known Bound Obtained by the 3-hour run of the MIP.
BKS**: Best Known Solutions Obtained by the 3-hour run of the MIP.

Please give a comment of the average Gap for MIP across different size of instances. It seems that these results are not stable (with 3 hrs, for instances with 1000, 800, 600, 400, 200, and 100 customers, the MIP model respectively obtained average Gaps of 63%, 64%, 49%, 54%, 56%, 49%). Also, the same inconsistency of results is reported for average ALNS (120 sec) Gap (-25%, -30%, -7%, -5%, -6%, 2%). Why is this happening? Did you test the sensitivity of ALNS regarding the total number of iterations for solving each instance?

The number of customers is not the only factor that defines the complexity of the problem. As a result, we might find that instances with a higher number of customers achieve better results (a smaller gap to optimality) than instances with a lower number. This phenomenon can occur due to various factors inherent in the MIP optimization process or the heuristic solution, as well as the characteristics of the problem instances.

Here are a few reasons why this might happen:

Problem Complexity: The complexity of the VRP can vary greatly depending on factors like the number of nodes, vehicles, and constraints. Smaller instances might have specific characteristics that make them harder to solve optimally within a limited runtime.

Solution Space: The structure of the solution space can differ between smaller and larger instances. It's possible that certain instance characteristics make it easier for the solver to find good solutions quickly in larger instances.

Heuristics and Relaxations: MIP solvers often use heuristics and relaxation techniques to efficiently search for solutions. These techniques might behave differently for smaller and larger instances, leading to unexpected gaps.

Initial Solution Quality: The quality of the initial solution provided to the solver can impact the solver's performance. Larger instances might benefit from a better initial solution, which can help guide the solver towards better solutions.

 

 

To illustrate this point in the paper, the following comment has been added to the conclusion:

From the results, it is found that the instance size is not the only factor that defines the complexity of the problem. As a result, instances with a higher number of customers may achieve better results (a smaller gap to optimality) than instances with a lower number. Other factors may play a role in increasing the complexity of the problem, such as the structure of the solution space, the heuristics and relaxation techniques used by the solver to efficiently search for solutions, and the quality of the initial solution.

 

 

 

In the paper "Maximum coverage capacitated facility location problem with range constrained drones", published in 2019 in Transportation Research Part C, the results show that there is no direct relation between the instance size and the gap to optimality. Also, there is no direct relation between the instance size and the runtimes, as shown in the following figures.

 

 

 

Why is the ALNS performing badly for the case of 100 customers (why did you decrease the ALNS available computational time to 40 sec, while you kept 3 hrs. for MIP model)?

And why did you decrease available computational time to 0.5 hrs for 50 Customer Instances?

The authors acknowledge your perspective; changing the computational times may be confusing. The tables are presented with 3-hour MIP runtimes and ALNS runtimes of 30, 60, 90, and 120 seconds. For the case with 50 customers, the calculations were done twice: once with a 3-hour MIP runtime and once with a 0.5-hour runtime. This repetition helps illustrate that even though the heuristic method does not match the 3-hour MIP model, it still performs better than the MIP model when the latter is run for only 0.5 hours.

Table 3 presents the results of the 50-customer instances with a 3-hour MIP runtime, while Table 4 reports the results for the same instances with a 0.5-hour MIP runtime. Across the 50-customer instances, the ALNS heuristic shows an average deterioration of 10% in comparison with the MIP model run for 3 hours. For the 0.5-hour MIP runtime, the ALNS heuristic slightly improves on the solutions obtained by the MIP model, by an average of 1%, within less than two minutes.
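As a quick arithmetic check of how the reported averages follow from the per-instance gaps (shown here for illustration), the 10% figure for the 120-second column of Table 3 is the mean of the five per-instance BKS gaps:

$$\frac{5\% + 32\% + 25\% - 7\% - 5\%}{5} = 10\%.$$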

Table 2. Comparative Analysis of MIP and ALNS Solution Approaches on 100 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

Instance | MIP 3 hrs. Sol | BKB Gap | ALNS 30 sec Sol | BKS Gap | ALNS 60 sec Sol | BKS Gap | ALNS 90 sec Sol | BKS Gap | ALNS 120 sec Sol | BKS Gap
C101  | 2,390 | 63% | 2403 |  1% | 2403 |  1% | 2389 |  0% | 2389 |  0%
C201  | 2,615 | 55% | 2720 |  4% | 2717 |  4% | 2637 |  1% | 2636 |  1%
R101  | 2,274 | 37% | 2284 |  0% | 2284 |  0% | 2245 | -1% | 2183 | -4%
R201  | 2,192 | 37% | 2173 | -1% | 2165 | -1% | 2165 | -1% | 2106 | -4%
Rc101 | 2,904 | 53% | 3054 |  5% | 3045 |  5% | 2907 |  0% | 2907 |  0%
Avg.  | 2,475 | 49% | 2527 |  2% | 2523 |  2% | 2469 |  0% | 2444 | -1%

Table 3. Comparative Analysis of MIP and ALNS Solution Approaches on 50 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 3 Hours of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

Instance | MIP 3 hrs. Sol | BKB Gap | ALNS 30 sec Sol | BKS Gap | ALNS 60 sec Sol | BKS Gap | ALNS 90 sec Sol | BKS Gap | ALNS 120 sec Sol | BKS Gap
r50_5_1 | 168 | 28% | 177 |  5% | 177 |  5% | 177 |  5% | 177 |  5%
r50_5_2 | 139 | 30% | 185 | 33% | 184 | 32% | 184 | 32% | 184 | 32%
r50_5_3 | 150 | 31% | 188 | 25% | 188 | 25% | 188 | 25% | 188 | 25%
r50_5_4 | 196 | 34% | 194 | -1% | 184 | -6% | 184 | -6% | 183 | -7%
r50_5_5 | 184 | 33% | 185 |  1% | 185 |  1% | 174 | -5% | 174 | -5%
Avg.    | 167 | 31% | 186 | 13% | 184 | 12% | 181 | 10% | 181 | 10%

Table 4. Comparative Analysis of MIP and ALNS Solution Approaches on 50 Customer Instances: Solution Outputs and Percentage Gap Relative to MIP's Solutions (after 0.5 hour of Runtime) at ALNS Runtimes of 30, 60, 90, and 120 Seconds.

Instance | MIP 0.5 hr. Sol | BKB Gap | ALNS 30 sec Sol | BKS Gap | ALNS 60 sec Sol | BKS Gap | ALNS 90 sec Sol | BKS Gap | ALNS 120 sec Sol | BKS Gap
r50_5_1 | 180 | 29% | 177 | -2% | 177 | -2% | 177 |  -2% | 177 |  -2%
r50_5_2 | 185 | 31% | 185 |  0% | 184 | -1% | 184 |  -1% | 184 |  -1%
r50_5_3 | 188 | 34% | 188 |  0% | 188 |  0% | 188 |   0% | 188 |   0%
r50_5_4 | 187 | 36% | 194 |  4% | 184 | -2% | 184 |  -2% | 183 |  -2%
r50_5_5 | 197 | 37% | 185 | -6% | 185 | -6% | 174 | -12% | 174 | -12%
Avg.    | 187 | 33% | 186 | -1% | 184 | -2% | 181 |  -3% | 181 |  -3%

 

Author Response File: Author Response.pdf

Round 3

Reviewer 2 Report

The authors have improved the submission, all issues were adequately addressed. My recommendation is to accept the submission for publication.
