Article
Peer-Review Record

Performance Analysis of a Clustering Model for QoS-Aware Service Recommendation

Electronics 2020, 9(5), 740; https://doi.org/10.3390/electronics9050740
by Fei Ding 1,2,*, Tao Wen 1,2, Suju Ren 1,2 and Jianmin Bao 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 1 March 2020 / Revised: 23 April 2020 / Accepted: 23 April 2020 / Published: 30 April 2020
(This article belongs to the Section Computer Science & Engineering)

Round 1

Reviewer 1 Report

The authors deal with the interesting issue of personalized recommendation of Web services and propose a new QoS-aware Web service recommendation model based on user and service clustering to improve recommendation accuracy. Their recommendation model is based on a three-step process: (1) extract the context features of the web service, (2) cluster by similarity based on the QoS attributes of historical service calls, and (3) the cooperative service function adopts the improved recommendation algorithm for QoS prediction.

The manuscript is well written and organized, though it lacks a grammar/spelling check to fix minor mistakes. The experimental setup is adequate, and the proposed approach, as shown by the research results, seems to outperform some mainstream recommendation algorithms, but I have a few concerns to raise.

  1. I would like to see an extended comparison of the proposed algorithm's performance.
  2. How is the kNN performance for a number of top recommended products less than 10? How about top recommendations for k = 1, for example?
  3. My biggest concern is the insufficient analysis of the experimental results. The proposed algorithm is very interesting, but the research findings are inadequately presented in terms of performance.
  4. Finally, I would like to see how the proposed algorithm performs in terms of scalability.

Author Response

Dear Reviewers,

Thank you for your comments and suggestions. Our responses have also been organized into a document; please refer to the attachment. Many thanks!

Please check the response below:

Point 1: I would like to see an extended comparison of the proposed algorithm's performance.

Response 1: We apologize for the absence of an extended comparison of the proposed algorithm's performance. In this revised manuscript, we have added one, and the relevant part of the statement has been modified (Figure 7).

      

Point 2: How is the kNN performance for a number of top recommended products less than 10? How about top recommendations for k = 1, for example?

Response 2: The choice of k has a significant impact on the results of the algorithm. If k is small, prediction is equivalent to using the training instances in a small neighborhood. When k < 10, the recommendation accuracy is affected: a small value of k means that the overall model becomes more complex and prone to overfitting.
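The effect described above can be illustrated with a minimal sketch. This is not the authors' code; the feature values, QoS values, and the `knn_predict` helper are hypothetical, chosen only to show how a small k lets one noisy historical record dominate the prediction while a larger k averages it out.

```python
# Hypothetical sketch (not the paper's implementation): a tiny k-NN QoS
# predictor over 1-D (feature, QoS) records.

def knn_predict(history, query, k):
    """Predict a QoS value for `query` as the mean QoS of its k nearest
    historical records (by absolute feature distance)."""
    ranked = sorted(history, key=lambda pair: abs(pair[0] - query))
    neighbors = ranked[:k]
    return sum(q for _, q in neighbors) / len(neighbors)

# Toy call history with one noisy outlier at feature 2.1 (QoS 0.90).
history = [(1.0, 0.20), (2.0, 0.22), (2.1, 0.90), (3.0, 0.25), (4.0, 0.27)]

# k = 1 copies the single nearest (here: noisy) record -> overfits.
print(knn_predict(history, 2.08, k=1))   # 0.9
# k = 3 averages over the neighborhood and dampens the noise.
print(knn_predict(history, 2.08, k=3))   # (0.90 + 0.22 + 0.25) / 3 ~ 0.457
```

With k = 1 the prediction jumps to the outlier's value; with k = 3 it is pulled back toward the neighborhood mean, which is the complexity/overfitting trade-off the response describes.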

 

Point 3: My biggest concern is the insufficient analysis of the experimental results. The proposed algorithm is very interesting but the research findings are inadequately presented in terms of performance.

Response 3: Thank you for your comments. Web service clustering, user clustering, and the matrix decomposition model that predicts QoS values to generate recommendations are the keys of RMUSC. We conducted a large number of experiments on the real dataset provided by WSDream and verified and analyzed the effect of the recommendation model and the influence of the related parameters. By analyzing the MAE and RMSE of our approach and of four other mainstream service recommendation methods, we conclude that the proposed method achieves higher recommendation performance. In the near future, we hope to try online real-time processing, further optimize and improve the architecture, and improve data acquisition and analytical processing power.

Point 4: Finally, I would like to see how the proposed algorithm performs in terms of scalability.

Response 4: Thank you for your comment. This paper describes in detail the experiment using the WSDream dataset. The dataset contains invocations of 5,825 real-world web services by 339 users, resulting in more than 1.5 million invocation records. 80% of the user invocation records were used to train our algorithm and obtain the best values for the relevant parameters; the remaining 20% were used to validate our model. Some QoS data were randomly deleted from the dataset so that the density of the matrix R was controlled at 5%, 10%, and 20%; the higher the matrix density, the more data is available. To verify the reliability of the experiment, we repeated it ten times at each matrix density. The results of the repeated experiments are stable, which indicates that our algorithm scales well.
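The density-control step described above can be sketched as follows. This is not the authors' code; the matrix contents and the `sparsify` helper are hypothetical, illustrating only the idea of randomly hiding known QoS entries so that a target fraction of the user-service matrix remains observed.

```python
# Hypothetical sketch (not the paper's implementation): control the density
# of a user-service QoS matrix by randomly hiding entries.
import random

def sparsify(matrix, density, seed=0):
    """Return a copy of `matrix` keeping roughly `density` of the known
    QoS entries; hidden entries are set to None."""
    rng = random.Random(seed)
    return [[q if rng.random() < density else None for q in row]
            for row in matrix]

# Toy 10x20 QoS matrix standing in for the (much larger) WSDream matrix R.
full = [[round(0.1 * (u + s), 2) for s in range(20)] for u in range(10)]

sparse = sparsify(full, density=0.10)  # target ~10% observed entries
kept = sum(q is not None for row in sparse for q in row)
print(f"kept {kept} of {10 * 20} entries")
```

Repeating the experiment with fresh random seeds at each density (5%, 10%, 20%), as the response describes, checks that the reported accuracy is stable rather than an artifact of one particular sampling.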

Best wishes,

Fei

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper investigates a clustering model for QoS-aware service recommendation. The research is of high quality and the analysis is extensive. Moreover, the authors have conducted a wide bibliography search. The results are depicted in numerous graphs and figures. The conclusions are interesting, and the proposed model's performance is highlighted in terms of recommendation efficiency and accuracy.

Author Response

Dear Reviewers,

Thank you for your comments and suggestions. Our responses have also been organized into a document; please refer to the attachment. Many thanks!

Please check the response below:

Point 1: The paper investigates a clustering model for QoS-aware service recommendation. The research is of high quality and the analysis is extensive. Moreover, the authors have conducted a wide bibliography search. The results are depicted in numerous graphs and figures. The conclusions are interesting, and the proposed model's performance is highlighted in terms of recommendation efficiency and accuracy.

Response 1: Thank you for your comments. We will continue to improve and refine the model to achieve better performance.

Best wishes,

Fei

 

Reviewer 3 Report

This paper introduces a QoS-aware web service recommendation model based on user and service clustering to improve recommendation accuracy. The proposed idea seems interesting, but the following comments should be clarified.

  1. The contribution is not clear. In the introduction, the authors should try to explain their research motivation and original idea and emphasize the unique contribution of this paper. What is your original contribution? The current version is not well supported by the references and experimental results. Write what you propose, not what you apply.
  2. The organization of the related works should be improved. Figure 1 can be considered to be listed as a subsection of the background or merged with Section 3. Besides, more references should be included to highlight the research gap and motivation.
  3. The details of the experimental setup should be clearly stated. The parameters of the database and the training and testing process should be clearly provided. How do you evaluate the results of the clustering? How do you identify the parameter K in Section 6.1? How long are the training and testing times? Have you conducted cross-validation?

Rather than running the training and testing only once, it is suggested to introduce cross-validation to further enhance the evaluation (e.g., five-fold or ten-fold).

  4. In lines 320-321, page 11, the authors list the proposed RMUSE and four other algorithms. However, I cannot find the RMUSE in Figure 5 and Figure 6. Please check through the manuscript to make it consistent.
  5. A discussion part should be introduced to explain the difference between your method and other approaches. What are your strengths and limitations?
  6. The conclusions are not well supported by the results. In the conclusion, the authors claim that "the experiment results shows that the proposed model outperforms the other mainstream recommendation algorithms in terms of recommendation efficiency and accuracy" (see lines 373-374, page 14). However, I cannot find any efficiency index in the results.
  7. The format is rather messy and the style of the bibliography is not consistent. Please check the guideline and reorganize the manuscript to make it readable.
  8. English should be largely improved by a native speaker. I saw many spelling errors while reading the paper. Careful correction is required.

Author Response

Dear Reviewers,

Thank you for your comments and suggestions. Our responses have also been organized into a document; please refer to the attachment. Many thanks!

Please check the response below:

Point 1: The contribution is not clear. In the introduction, the authors should try to explain their research motivation and original idea and emphasize the unique contribution of this paper. What is your original contribution? The current version is not well supported by the references and experimental results. Write what you propose, not what you apply.

Response 1: We apologize that our original contribution was unclear. In this revised version, we have carefully given a persuasive description of our substantive work. The relevant part of the statement has been modified in Section 1. Moreover, we cite more literature in Section 2 to support our paper.

 

Point 2: The organization of the related works should be improved. Figure 1 can be considered to be listed as a subsection of the background or merged with Section 3. Besides, more references should be included to highlight the research gap and motivation.

Response 2: Thank you for your constructive comments. In this revised version, Figure 1 has been merged with Section 3. We also apologize for the absence of references highlighting the research gap and motivation; we have added several papers. The relevant part of the statement has been modified in Section 2.

 

Point 3: The details of the experimental setup should be clearly stated. The parameters of the database and the training and testing process should be clearly provided. How do you evaluate the results of the clustering? How do you identify the parameter K in Section 6.1? How long are the training and testing times? Have you conducted cross-validation? Rather than running the training and testing only once, it is suggested to introduce cross-validation to further enhance the evaluation (e.g., five-fold or ten-fold).

Response 3: We apologize for the lack of detail about the experimental setup, and thank you for your suggestions. In this revised version, we have corrected this. The relevant part of the statement has been modified as follows:

This paper describes in detail the experiment using the WSDream dataset. The dataset contains invocations of 5,825 real-world web services by 339 users, resulting in more than 1.5 million invocation records. 80% of the user invocation records were used to train our algorithm and obtain the best values for the relevant parameters; the remaining 20% were used to validate our model. Clustering precision is an important indicator of clustering quality, and we use it to evaluate the clustering results (Figure 5). Randomly select k objects as the initial centroids of k clusters; then assign each remaining object to the nearest cluster according to its distance from each cluster's centroid. This iterative relocation process is repeated until the cluster centroids no longer change.
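The iterative-relocation loop described above is standard k-means. The following is a minimal sketch, not the authors' code; the 1-D toy values and the `kmeans` helper are hypothetical and only illustrate the assign/recompute cycle ending when centroids stop moving.

```python
# Hypothetical sketch (not the paper's implementation): the k-means
# iterative-relocation loop on 1-D QoS-like values, for brevity.
import random

def kmeans(points, k, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # randomly pick k initial centroids
    while True:
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # Update step: recompute centroids; stop once they no longer change.
        updated = [sum(c) / len(c) if c else centroids[i]
                   for i, c in enumerate(clusters)]
        if updated == centroids:
            return centroids, clusters
        centroids = updated

# Two well-separated groups of toy QoS values.
points = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
centroids, clusters = kmeans(points, k=2)
```

On this toy data the loop converges to the two natural groups (centroids near 0.15 and 0.95) regardless of which two points are drawn as the initial centroids.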

 

Point 4: In lines 320-321, page 11, the authors list the proposed RMUSE and four other algorithms. However, I cannot find the RMUSE in Figure 5 and Figure 6. Please check through the manuscript to make it consistent.

Response 4: We apologize for not checking the manuscript thoroughly. In this revised version, we have checked it for mistakes and made it consistent.

 

Point 5: A discussion part should be introduced to explain the difference between your method and other approaches. What are your strengths and limitations?

Response 5: Thank you for your constructive comments, and we apologize for the absence of a discussion part explaining the difference between our method and other approaches. In this revised version, we have discussed the strengths and limitations; the relevant statements have been modified and added in Section 7.

 

Point 6: The conclusions are not well supported by the results. In the conclusion, the authors claim that "the experiment results shows that the proposed model outperforms the other mainstream recommendation algorithms in terms of recommendation efficiency and accuracy" (see lines 373-374, page 14). However, I cannot find any efficiency index in the results.

Response 6: We apologize for the lack of a detailed explanation. Figure 8 and Figure 9 show that our method achieves smaller values of the MAE and RMSE evaluation parameters than the other four mainstream recommendation algorithms, which demonstrates that our approach has better recommendation accuracy.

 

Point 7: The format is rather messy and the style of the bibliography is not consistent. Please check the guideline and reorganize the manuscript to make it readable.

Response 7: We apologize for the messy format and inconsistent bibliography style. In this revised version, we have carefully checked and corrected them.

 

Point 8: English should be largely improved by a native speaker. I saw many spelling errors while reading the paper. Careful correction is required.

Response 8: We apologize for these mistakes and regret the problems with the English. In this revised version, the paper has been carefully revised by a professional language editing service to improve its grammar and readability.

Best wishes,

Fei

 

Author Response File: Author Response.pdf

Reviewer 4 Report

In this paper, the authors propose a new recommendation model by jointly considering the impact of service function characteristics and similar user preferences.

This paper is readable and well-written.  However, I also have some comments on this paper.

The authors must insert a section (Limits and suggestions for future works) detailing the limitations of the research, implementation practice guidelines, and future studies.

Furthermore, the authors should expand Section 2. Some suggestions:

1. M. E. Khanouche, H. Gadouche, Z. Farah, A. Tari, "Flexible QoS-aware services composition for service computing environments," Computer Networks, 2020, Elsevier.

2. F. Messina, G. Pappalardo, A. Comi, (...), D. Rosaci, G. M. L. Sarné, "Combining reputation and QoS measures to improve cloud service composition."

3. Z. Wang, B. Cheng, W. Zhang, J. Chen, "Q-Graphplan: QoS-Aware Automatic Service Composition With the Extended Planning Graph," IEEE Access, 2020.

Author Response

Dear Reviewers,

Thank you for your comments and suggestions. Our responses have also been organized into a document; please refer to the attachment. Many thanks!

Please check the response below:

Point 1: The authors must insert a section (Limits and suggestions for future works) detailing the limitations of the research, implementation practice guidelines, and future studies.

Furthermore, the authors should expand Section 2. Some suggestions:

  1. M. E. Khanouche, H. Gadouche, Z. Farah, A. Tari, "Flexible QoS-aware services composition for service computing environments," Computer Networks, 2020, Elsevier.
  2. F. Messina, G. Pappalardo, A. Comi, (...), D. Rosaci, G. M. L. Sarné, "Combining reputation and QoS measures to improve cloud service composition."
  3. Z. Wang, B. Cheng, W. Zhang, J. Chen, "Q-Graphplan: QoS-Aware Automatic Service Composition With the Extended Planning Graph," IEEE Access, 2020.

Response 1: Thank you for your constructive suggestions, and we apologize for the absence of these papers from the references. In this revised version, we have expanded Section 2. Moreover, future work and limitations are discussed in Section 7.

 

Best wishes,

Fei

 

Round 2

Reviewer 1 Report

In the revised version of the manuscript, authors have provided sufficient responses to the points I have raised in my initial review. However, I would suggest minor corrections to improve the paper's readability.

For example, English language corrections should be made, as well as consistency in the format of the section headings (i.e., "4." or "4"?). In general, the paper's quality and presentation have been improved in this revision, and I have no further points to raise.

Author Response

Dear Reviewers,

 

Thank you for your comments and suggestions. We have revised the manuscript according to your suggestions. The revised details are shown in the attached manuscript named "Electronics-R2's revised version". Many thanks!

 

Best wishes,

Author Response File: Author Response.pdf

Reviewer 3 Report

The authors have addressed my queries. However, the presentation can be further improved to make it more readable as a journal paper.

Author Response

Dear Reviewers,

 

Thank you for your comments and suggestions. We have arranged the content according to your suggestions. The revised details are shown in the attached manuscript named "Electronics-R2's revised version". Many thanks!

 

Best wishes,

 

Fei

Author Response File: Author Response.pdf
