Article
Peer-Review Record

Dynamic Routing Policies for Multi-Skill Call Centers Using Deep Q Network

Mathematics 2023, 11(22), 4662; https://doi.org/10.3390/math11224662
by Qin Zhang
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 24 September 2023 / Revised: 10 November 2023 / Accepted: 14 November 2023 / Published: 16 November 2023

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

- The research problem is well motivated and sufficient references are provided in the introduction.
- The model is clearly formulated in Section 2.
- The Deep Q Network methodology used to solve the call center routing problem is solid and appropriate.

Comments:
- Add an itemized list of the main contributions at the end of the Introduction, followed by a paragraph describing the paper's structure.
- Line 242: "This chapter presents" -> "This section presents".
- In Section 4, it should be mentioned which numerical tool (R, MATLAB, Python, ...) has been used.
- Providing the simulation code in a public repository would support the reproducibility of the numerical results.
- Mention which criteria have been used to choose the simulation parameters in Section 5.
- For ease of reading and reference, present all simulation parameters described in Section 5 in table format.
- In line 294, explain why these policies have been chosen for comparison.
- Figure 5 is not clear. It is referred to as the DRL algorithm. Does DRL refer to the proposed DQN? This should be clarified.
- The x-axis of Figure 5 shows steps, but line 313 says "It can be seen that 20 episodes can well ensure the convergence". Why are steps, rather than episodes, shown on the x-axis? In terms of steps, convergence seems to occur at around 50,000 steps.
- For better readability, the x-axes of Figures 5 and 6 should be in thousands.
- The routing optimization times (discussed in lines 324-) should be included in Table 1 for all policies under comparison.
- In lines 333-, the authors say "We can adjust the penalty factor for each type of customer to achieve our specific goal...". The authors should mention whether such an adjustment, or a similar one, could also be made with the other policies under comparison (a minimal illustration of a penalty-weighted reward is sketched after this comment list).
- In the conclusions the authors say "The DQN based dynamic routing policy performs better... is much faster". Numerical values supporting these claims should be provided.
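
As an illustrative aside on the penalty-factor comment above, the following minimal Python sketch shows one way a per-customer-type penalty factor could enter the reward that a DQN routing agent optimises. The CustomerEvent class, the PENALTY_FACTORS weights, and the reward shape are hypothetical and are not taken from the paper; they only indicate the kind of adjustment being discussed.

# Hypothetical sketch of a per-customer-type penalty in the routing reward.
# The weights and the reward shape are illustrative, not the paper's definitions.
from dataclasses import dataclass

@dataclass
class CustomerEvent:
    customer_type: str   # e.g. "sales", "support", "billing"
    wait_time: float     # seconds the customer waited before service
    abandoned: bool      # True if the customer hung up before being served

# A larger factor penalises that customer type's waiting/abandonment more.
PENALTY_FACTORS = {"sales": 2.0, "support": 1.0, "billing": 0.5}
ABANDON_COST = 10.0  # extra cost if the call is lost

def routing_reward(event: CustomerEvent) -> float:
    """Negative reward (cost) the agent receives for this customer."""
    w = PENALTY_FACTORS.get(event.customer_type, 1.0)
    cost = w * event.wait_time
    if event.abandoned:
        cost += w * ABANDON_COST
    return -cost

# Example: a "sales" caller who waited 30 s and was eventually served
print(routing_reward(CustomerEvent("sales", 30.0, False)))  # -60.0

Raising the factor for one customer type makes the learned policy favour that type, which is the kind of goal-specific tuning the quoted sentence refers to.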
 

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Comments and Suggestions for Authors

Comments to the authors:

1. The Reviewer feels that the topic of this paper is not very relevant to the scope of the MATHEMATICS journal.

2. The main differences between this paper and the related works are not presented clearly.

3. It is too difficult for the Reviewer to evaluate the main contribution of this paper. The main contribution should be highlighted.

4. The proposed algorithms in this paper are only an extension of existing ones.

5. The authors are recommended to add more analysis and performance evaluation of the proposed algorithms to enhance the contribution of this paper.

6. More performance comparisons with existing methods should be added to this paper.

7. More results and discussion should be added to show the advantages of the proposed algorithms more clearly.

8. There are many typos in this paper that need to be corrected:

- “MSC:” on the first page.

- All the contents of Table 1 should be placed on the same page.

- "Fig 5" should be corrected to "Fig. 5". Please correct similar typos.

- Equation numbers should be placed at the right margin.

- Please check for grammatical errors in this paper.

 

Comments on the Quality of English Language

Moderate editing of English language is required.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

In this article, the authors propose a Deep Q Network based reinforcement learning approach to address dynamic routing in call centers with multiple skill types and agent groups. The paper is well presented, with sufficient experimental/simulation analysis.

- Please clearly outline the contributions in the Introduction section.

- A fault-handling and loss-recovery mechanism with proper traffic shaping is missing. The model may need to incorporate a message-queuing mechanism for queue management to avoid possible losses and to smooth the traffic flow (a toy per-skill queue is sketched after this comment list).

- Please carefully verify the formulas and the parameters/values assumed in the experiments.
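
To make the queue-management suggestion above concrete, here is a minimal Python sketch of a bounded per-skill FIFO queue with a simple loss counter. The SkillQueue class, its capacity, and the loss-counting behaviour are hypothetical illustrations and are not part of the paper's model.

# Hypothetical sketch of a bounded per-skill FIFO queue, illustrating the kind
# of queue-management mechanism suggested above (not the paper's model).
from collections import deque

class SkillQueue:
    def __init__(self, capacity: int = 50):
        self.capacity = capacity   # maximum number of customers allowed to wait
        self.waiting = deque()     # FIFO buffer of waiting customers
        self.lost = 0              # customers blocked when the queue is full

    def arrive(self, customer_id: int) -> bool:
        """Enqueue a customer; return False if the call is blocked (lost)."""
        if len(self.waiting) >= self.capacity:
            self.lost += 1
            return False
        self.waiting.append(customer_id)
        return True

    def next_customer(self):
        """Pop the longest-waiting customer, or None if the queue is empty."""
        return self.waiting.popleft() if self.waiting else None

# Example usage: a queue of capacity 2 receiving three arrivals
q = SkillQueue(capacity=2)
for cid in range(3):
    q.arrive(cid)
print(q.lost, q.next_customer())  # prints: 1 0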

Comments on the Quality of English Language

Careful proofreading is required, together with verification of the parameters/values provided.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The Reviewer has no further comments.

Comments on the Quality of English Language

Minor editing of English language is required.
