Article
Peer-Review Record

Data-Intensive Inventory Forecasting with Artificial Intelligence Models for Cross-Border E-Commerce Service Automation

Appl. Sci. 2023, 13(5), 3051; https://doi.org/10.3390/app13053051
by Yuk Ming Tang 1,2,*, Ka Yin Chau 2, Yui-yip Lau 3,* and Zehang Zheng 1
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 3 February 2023 / Revised: 21 February 2023 / Accepted: 22 February 2023 / Published: 27 February 2023

Round 1

Reviewer 1 Report

The title of the article and the abstract describe the content well.

I would recommend including the word “Model” in the keywords.

The literature review reflects mostly current literature and does not deal as much with traditional authors, but that is not a problem. A minor shortcoming I see here is the lack of focus on the process areas that are integral to the subject; I would recommend adding a short paragraph on this topic, citing for example:

https://doi.org/10.3390/fi15010013

https://doi.org/10.3390/pr10030539

https://doi.org/10.3390/su15043120

The methodological part is well done. I would only welcome a broader introduction of the company in which the study was carried out, focusing mainly on the elements and factors it has in common with other companies, but also on its specifics. There is no apparent reason given for choosing this particular company. The data, as well as the procedure, are structured logically and clearly.

Please improve the quality of Figure 2.

The discussion is poorly developed and needs to be expanded, with more comparison of the conclusions with those of other authors. The authors mention comparisons with other models but do not cite them.

I appreciate the mention of limits in conclusion.


Overall, I recommend the article for publication after incorporating the suggestions. 

Author Response

Reviewer 1

The title of the article and the abstract describe the content well.

Response: Thanks for the positive comments.

I would recommend including the word “Model” in the keywords.

Response: Thanks for the reviewer’s comment. We have added “Model” to the keywords.

The literature review reflects mostly current literature and does not deal as much with traditional authors, but that is not a problem. A minor shortcoming I see here is the lack of focus on the process areas that are integral to the subject; I would recommend adding a short paragraph on this topic, citing for example:

https://doi.org/10.3390/fi15010013

https://doi.org/10.3390/pr10030539

https://doi.org/10.3390/su15043120

Response: Thanks for the reviewer’s comments. We have cited the above-suggested research articles to illustrate this point in Section 2.3.

The methodological part is well done. I would only welcome a broader introduction of the company in which the study was carried out, focusing mainly on the elements and factors it has in common with other companies, but also on its specifics. There is no apparent reason given for choosing this particular company. The data, as well as the procedure, are structured logically and clearly.

Response: Thanks for the reviewer’s comment. We have acknowledged this limitation in Section 6, paragraph 2.

Please improve the quality of Figure 2.

Response: Thanks for the reviewer’s comments. The figure has been revised.

The discussion is poorly developed and needs to be expanded, with more comparison of the conclusions with those of other authors. The authors mention comparisons with other models but do not cite them.

I appreciate the mention of limits in conclusion.

Response: Thanks for the reviewer’s positive comment.

Overall, I recommend the article for publication after incorporating the suggestions. 

Response: Thanks for the reviewer’s positive comment.

Author Response File: Author Response.docx

Reviewer 2 Report

The research background is well described, and the techniques used in the research methods are well articulated. I have some concerns:

I detected a copy-pasted paragraph from this master's thesis: http://lib.buet.ac.bd:8080/xmlui/bitstream/handle/123456789/5332/Full%20Thesis.pdf?isAllowed=y&sequence=1

In the thesis, the paragraph is on page 16 (with citation number [6] in the thesis), while in the paper it appears in lines 40 to 43 of the introduction. I hope the authors did not do that on purpose and simply forgot to cite this source.

I would like to know how the authors perform their machine-learning models' validation and cross-validation steps.

Is the code (Python, R or Matlab) available? I think it is crucial for reproducibility.

The figures need some improvements.

The mathematical equation should be revised.


Author Response

Reviewer 2

The research background is well described, and the techniques used in the research methods are well articulated. I have some concerns:

I detected a copy-pasted paragraph from this master's thesis: http://lib.buet.ac.bd:8080/xmlui/bitstream/handle/123456789/5332/Full%20Thesis.pdf?isAllowed=y&sequence=1

In the thesis, the paragraph is on page 16 (with citation number [6] in the thesis), while in the paper it appears in lines 40 to 43 of the introduction. I hope the authors did not do that on purpose and simply forgot to cite this source.

Response: Thanks for the reviewer’s comments. We have removed the sentences.

I would like to know how the authors perform their machine-learning models' validation and cross-validation steps.

Response: Thanks for the reviewer’s comments. 70% of the data set is used as the training set to train the model parameters, 25% is used as the validation set to validate the model and to tune and optimize the parameters, and 5% is used as the testing set to assess the accuracy of the model's predictions.
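A minimal sketch of such a 70/25/5 split (assuming the data are ordered chronologically, which is common for sales time series but is not stated explicitly in the response; the function and variable names are illustrative, not the authors' actual code):

    import pandas as pd

    def split_dataset(df: pd.DataFrame):
        """Split an ordered data set into 70% training, 25% validation
        and 5% testing, matching the proportions described above."""
        n = len(df)
        train_end = int(n * 0.70)
        val_end = int(n * 0.95)
        return df.iloc[:train_end], df.iloc[train_end:val_end], df.iloc[val_end:]

    # Example usage (df is a hypothetical DataFrame sorted by date):
    # train_df, val_df, test_df = split_dataset(df)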

Is the code (Python, R or Matlab) available? I think it is crucial for reproducibility.

Response: Thanks for the reviewer’s comments. Python is used, and the code covers the following parts (an illustrative sketch is given after the list):

  • Data Processing
  • Feature Preprocessing and Extraction
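The authors' full code is provided in the attached response file; the following is only an illustrative sketch of what these two parts might contain (file names, column names, and parameters are hypothetical assumptions, not taken from the paper):

    import pandas as pd

    # Data processing: load the combined data set and clean it
    data = pd.read_csv("combined_data.csv", parse_dates=["date"])   # hypothetical file
    data = data.sort_values(["sku", "date"]).drop_duplicates()
    data["quantity"] = data["quantity"].fillna(0)                    # treat missing sales as zero

    # Feature preprocessing and extraction: derive calendar and lag features
    data["month"] = data["date"].dt.month
    data["dayofweek"] = data["date"].dt.dayofweek
    data["lag_7"] = data.groupby("sku")["quantity"].shift(7)         # demand one week earlier
    data["rolling_mean_28"] = (
        data.groupby("sku")["quantity"]
            .transform(lambda s: s.shift(1).rolling(28).mean())      # trailing 4-week average
    )
    data = data.dropna()                                             # keep rows with complete features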

The figures need some improvements.

Response: Thanks for the reviewer’s comments. The figures have been revised.

The mathematical equation should be revised.

Response: Thanks for the reviewer’s comments. The equations have been double-checked.

Author Response File: Author Response.docx

Reviewer 3 Report

Manuscript ID: applsci-2230283

Comments:

This study used AI models to study intensive inventory data collected from an e-commerce company. The experimental results demonstrated that the XGBoost method has superior performance. However, there are some major concerns about the paper, as follows:

1. The abstract lacks details and information about the scientific contribution of the work; please rewrite it to provide more details on the scientific contribution and numerical comparisons.

2. How can raw data be converted into the input matrix or sequence data of the proposed model? The authors should explain this clearly.

3. What is the length of the window size used to capture the input data for feature extraction? How are the training and test sets divided?

4. Only using RMSE as an evaluation index cannot comprehensively evaluate the performance of the proposed model, and it is recommended to add MAPE.

5. The hyper-parameter selection of the models and baselines used in the paper is unclear. How are the hyper-parameters determined in this work? Please add better validation or cross-validation figures and tables.

6. What is the difference between Feature Extraction and Feature Selection (Fig. 1) in the data processing process?

7. Please enhance the experimental results by providing a comprehensive comparison with other published SOTA methods, including deep neural architectures.

8. In order to demonstrate the generalization of the proposed model, please validate it on other data.

Comments for author File: Comments.pdf

Author Response

Reviewer 3

This study used AI models to study intensive inventory data collected from an e-commerce company. The experimental results demonstrated that the XGBoost method has superior performance. However, there are some major concerns about the paper, as follows:

  1. The abstract lacks details and information about the scientific contribution of the work; please rewrite it to provide more details on the scientific contribution and numerical comparisons.

Response: Thanks for the reviewer’s comments. We have provided the scientific contribution in the abstract.

  2. How can raw data be converted into the input matrix or sequence data of the proposed model? The authors should explain this clearly.

Response: Thanks for the reviewer’s comments. The raw data consist of several data files, including the goods and SKU relation, inventory data, sales data, etc. These files are combined into a matrix format using Python. To handle the data set, we propose dividing it into a number of data series and fields; in this investigation, the data types are classified into nine data series.
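As a minimal illustration of this combination step (the file, key, and column names below are assumptions for illustration, not the paper's actual schema):

    import pandas as pd

    goods_sku = pd.read_csv("goods_sku_relation.csv")   # goods-to-SKU mapping
    inventory = pd.read_csv("inventory.csv")            # stock level per SKU and date
    sales = pd.read_csv("sales.csv")                    # units sold per SKU and date

    # Combine the separate files into one matrix keyed by SKU and date
    matrix = (
        sales.merge(goods_sku, on="sku", how="left")
             .merge(inventory, on=["sku", "date"], how="left")
    )

    # Separate the model inputs from the prediction target
    y = matrix["units_sold"]
    X = matrix.drop(columns=["units_sold", "date"])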

  3. What is the length of the window size used to capture the input data for feature extraction? How are the training and test sets divided?

Response: Thanks for the reviewer’s comments. 70% of the data set is used as the training set to train the model parameters, 25% is used as the validation set to validate the model and to tune and optimize the parameters, and 5% is used as the testing set to assess the accuracy of the model's predictions.

  4. Only using RMSE as an evaluation index cannot comprehensively evaluate the performance of the proposed model, and it is recommended to add MAPE.

Response: Thanks for the reviewer’s comments. It is elaborated in the discussion section.

  5. The hyper-parameter selection of the models and baselines used in the paper is unclear. How are the hyper-parameters determined in this work? Please add better validation or cross-validation figures and tables.

Response: Thanks for the reviewer’s comments. The parameter settings for each AI model are summarized in Table 4; they were determined by selecting the configuration with the lowest RMSE.
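A minimal sketch of such an RMSE-based selection (the candidate values are placeholders, X_train/y_train/X_val/y_val are assumed to come from the split described above, and the settings actually chosen are those reported in Table 4 of the paper):

    import numpy as np
    from itertools import product
    from sklearn.metrics import mean_squared_error
    from xgboost import XGBRegressor

    candidate_lr = [0.01, 0.05, 0.1]    # placeholder learning rates
    candidate_depth = [3, 5, 7]         # placeholder tree depths

    best_rmse, best_params = float("inf"), None
    for lr, depth in product(candidate_lr, candidate_depth):
        model = XGBRegressor(learning_rate=lr, max_depth=depth, n_estimators=500)
        model.fit(X_train, y_train)
        rmse = np.sqrt(mean_squared_error(y_val, model.predict(X_val)))
        if rmse < best_rmse:            # keep the setting with the lowest validation RMSE
            best_rmse = rmse
            best_params = {"learning_rate": lr, "max_depth": depth}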


  6. What is the difference between Feature Extraction and Feature Selection (Fig. 1) in the data processing process?

Response: Thanks for the reviewer’s comments. Feature extraction extracts feature fields from the original data and converts them into a specific format, while feature selection screens out the better feature sets in order to achieve better model performance.
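To make the distinction concrete, a small sketch (the use of SelectKBest and the value of k are illustrative assumptions, not the screening method actually used in the paper; X and y are the prepared feature table and target from the steps sketched earlier):

    from sklearn.feature_selection import SelectKBest, f_regression

    # Feature extraction (sketched earlier): derive feature fields such as calendar
    # and lag features from the raw data and arrange them in a fixed tabular format.

    # Feature selection: screen the extracted features and keep a stronger subset
    selector = SelectKBest(score_func=f_regression, k=10)    # k chosen for illustration only
    X_selected = selector.fit_transform(X, y)
    kept_columns = X.columns[selector.get_support()]          # names of the retained features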

  7. Please enhance the experimental results by providing a comprehensive comparison with other published SOTA methods, including deep neural architectures. In order to demonstrate the generalization of the proposed model, please validate it on other data.

Response: Thanks for the reviewer’s comments. It is elaborated in the discussion section.

Author Response File: Author Response.docx

Round 2

Reviewer 3 Report

1. The study is much improved and can be easily understood by users.

2. More parameters of XGBoost need to be stated, e.g., learning rate, max_depth, which are critical to prevent overfitting.

3. The authors say that feature selection filters out some better features. However, what is the screening method? How are the better features judged? Please state this in the revised manuscript.

4. With regard to the first reviewer's other points raised last time, although some of the authors' replies did not meet my expectations, I reluctantly accepted these explanations.

5. The positions of some equation labels are incorrect. Please check them.

Author Response

  1. The study is much improved and can be easily understood by users.

Response: Thanks for the reviewer’s positive comments.

  2. More parameters of XGBoost need to be stated, e.g., learning rate, max_depth, which are critical to prevent overfitting.

Response: Thanks for the reviewer’s comments. We have provided the parameters of XGBoost in Table 4.

  3. The authors say that feature selection filters out some better features. However, what is the screening method? How are the better features judged? Please state this in the revised manuscript.

Response: Thanks for the reviewer’s comments. We have provided the elaboration in Section 3.1, paragraph 2.

  4. With regard to the first reviewer's other points raised last time, although some of the authors' replies did not meet my expectations, I reluctantly accepted these explanations.

Response: Thanks for the reviewer’s positive comments.

  5. The positions of some equation labels are incorrect. Please check them.

Response: Thanks for the reviewer’s comments. We have corrected the positions of the equation labels.

Author Response File: Author Response.docx
