Article
Peer-Review Record

A Taxonomy of Techniques for SLO Failure Prediction in Software Systems

by Johannes Grohmann 1,*, Nikolas Herbst 1, Avi Chalbani 2, Yair Arian 2, Noam Peretz 2 and Samuel Kounev 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 31 December 2019 / Revised: 4 February 2020 / Accepted: 5 February 2020 / Published: 11 February 2020
(This article belongs to the Special Issue Applications in Self-Aware Computing Systems and their Evaluation)

Round 1

Reviewer 1 Report

1. This paper presents a review of failure prediction methods in software systems and develops a taxonomy of failure prediction techniques along three different dimensions. Such a survey on failure prediction is valuable. However, some closely related literature is not covered in the current form; in particular, several important “architecture-aware online failure prediction” methods are not mentioned in the paper, listed as follows:
[1] Architecture-aware online failure prediction for software systems. University of Stuttgart, Germany, 2018.
[2] Hora: Architecture-aware online failure prediction. Journal of Systems and Software 137: 669-685 (2018).
[3] Seer: A Lightweight Online Failure Prediction Approach. IEEE Trans. Software Eng. 42(1): 26-46 (2016).
I therefore suggest that the paper needs a major revision.
2. The paper is well prepared and in principle acceptable. However, there are a few issues to address before final acceptance.
(1) Firstly, some methods are not clearly presented, for example, the time series forecasting approach, the performance prediction approach, and so on (an illustrative sketch of a forecasting-based predictor is given after this list).
(2) Secondly, since the authors have already summarized a number of algorithms along with their strengths and weaknesses, why not show a systematic comparison of the widely used algorithms? Such a comparison would be the most significant part of this survey.
(3) Lastly, there are some typos and errors (including the style of references, e.g., refs. [43] and [52]). The authors should carefully check the whole manuscript.
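To make point (1) concrete, the following is a minimal, purely illustrative sketch, not taken from the paper under review, of what a time-series-forecasting-based SLO failure predictor might look like: recent latency measurements are extrapolated with a simple linear trend, and a failure is predicted if the forecast is expected to cross the SLO threshold. The names slo_threshold_ms and forecast_horizon are assumptions made for illustration only.

```python
# Illustrative sketch only: a naive time-series-forecasting SLO failure predictor.
# The parameter names (slo_threshold_ms, forecast_horizon) are hypothetical and
# not taken from the paper under review; a real approach would use a proper
# forecasting model (e.g., ARIMA or exponential smoothing) instead of a linear fit.
from statistics import mean


def forecast_next(values, horizon):
    """Forecast future values with a least-squares linear trend over the window."""
    n = len(values)
    xs = range(n)
    x_mean, y_mean = mean(xs), mean(values)
    # Least-squares slope of the recent measurements (guard against n == 1).
    denom = sum((x - x_mean) ** 2 for x in xs) or 1.0
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) / denom
    return [y_mean + slope * (n - 1 + h - x_mean) for h in range(1, horizon + 1)]


def predict_slo_failure(latencies_ms, slo_threshold_ms=200.0, forecast_horizon=5):
    """Return True if any forecast value is expected to violate the SLO threshold."""
    forecast = forecast_next(latencies_ms, forecast_horizon)
    return any(value > slo_threshold_ms for value in forecast)


if __name__ == "__main__":
    observed = [120, 130, 145, 160, 170, 185]  # rising response times (ms)
    print(predict_slo_failure(observed))       # True: the trend crosses 200 ms soon
```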

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors have presented a taxonomy derived from the related work. The taxonomy is then validated by conducting a systematic mapping study. However, the paper has several drawbacks as it currently stands. The drawbacks are listed section by section.

Abstract:

The abstract is too short. It would be better if the authors also explained a little about the taxonomy in the abstract.

Introduction:

As stated in the introduction, the authors' contributions are twofold: 1) the taxonomy and 2) a systematic literature mapping study that classifies related work into the proposed taxonomy in order to validate it. However,

Related Work:

The related work section is very short. I recommend that the authors add more related works and compare them with their study to show the contribution clearly, for example by making a table with fields such as study, contribution, etc.

Methodology:

As the authors describe, the taxonomy is derived from 59 scientific papers, 4 PhD theses, and 1 US patent in the final taxonomy categorization. However, the authors did not provide references for these extracted studies.

 

Taxonomy:

One of the classification parameters is “modeling type”. Ontologies are one modeling type used to predict or detect failures.

Some related works that use ontologies for failure prediction or detection:

http://dx.doi.org/10.3390/computers7040068
http://dx.doi.org/10.1002/cpe.4481
http://dx.doi.org/10.1109/ICRSE.2017.8030746
https://doi.org/10.1016/j.jii.2017.02.006

 

Survey Results:

More results should be included in order to validate the taxonomy. As mentioned above, the ontological aspect of failure prediction and detection is missing.

A threats-to-validity section is also missing. It is necessary to mention the threats to validity and the limitations of the study.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Thank you for providing a response in detail. The modifications have improved the manuscript.

Again, I have a small concern about the failure modeling with ontologies, which was missing. However, the manuscript is already in its second round of review and has sufficient quality to be published in its current form.

Author Response

Dear reviewer,

We would like to thank you again for the constructive feedback, as well as the short feedback cycles. We think that the reviews significantly improved the quality of the submission, and we have added a small "Thank you" note in the Acknowledgements.
