Article
Peer-Review Record

AI for Automating Data Center Operations: Model Explainability in the Data Centre Context Using Shapley Additive Explanations (SHAP)

Electronics 2024, 13(9), 1628; https://doi.org/10.3390/electronics13091628
by Yibrah Gebreyesus 1,*, Damian Dalton 1, Davide De Chiara 2, Marta Chinnici 3 and Andrea Chinnici 4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 27 February 2024 / Revised: 18 April 2024 / Accepted: 22 April 2024 / Published: 24 April 2024
(This article belongs to the Special Issue Advances in AI Engineering: Exploring Machine Learning Applications)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The use of SHAP for enhancing explainability in AI and ML models is documented extensively in the literature. The manuscript's emphasis on model explainability within data centers does not significantly advance the knowledge in this area. Here are my recommendations to the authors:

  1. The manuscript does not provide a detailed comparative analysis with other explainability methods. The justification for choosing SHAP exclusively for this application is not provided.

  2. The paper lacks detailed information on data preprocessing steps, model selection criteria, and validation processes. This detail is crucial for enabling readers and practitioners to replicate the study's results and apply the findings in different settings.

  3. Despite employing various visualization tools to demonstrate the impact of features on model predictions, the paper does not thoroughly evaluate the effectiveness of explainability.

  4. The paper does not discuss the computational complexity of implementing SHAP in the real-time operational context of data centers. Considering the dynamic and time-sensitive nature of data center operations, it is vital to understand the computational requirements to evaluate the feasibility of SHAP's application in these environments (see the timing sketch after this list).
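A minimal sketch of the kind of timing check the last point calls for: measuring how long Tree SHAP attributions take for a batch of incoming telemetry, assuming a tree-based regressor. The model, feature count, and data below are illustrative assumptions, not the authors' actual setup.

```python
# Illustrative only: gauge the cost of generating SHAP explanations for a
# batch of "live" data center readings with a tree-based model.
import time

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5_000, 50))   # placeholder for telemetry features
y_train = rng.normal(size=5_000)         # placeholder target, e.g. temperature
X_live = rng.normal(size=(100, 50))      # a batch of incoming readings

model = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X_train, y_train)

# Tree SHAP is polynomial-time for tree ensembles; the model-agnostic
# KernelExplainer would be far more expensive and rarely viable in real time.
explainer = shap.TreeExplainer(model)

start = time.perf_counter()
shap_values = explainer.shap_values(X_live)   # shape: (100, 50)
elapsed = time.perf_counter() - start
print(f"SHAP attributions for {X_live.shape[0]} samples computed in {elapsed:.2f} s")
```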

Comments on the Quality of English Language

It is necessary to review the manuscript's English language, formatting, and citation of references.

Author Response

 

Dear Reviewer, we really appreciate the time you took to review our contribution.

  1. We included literature and clearly stated our justification for why we employed SHAP.
  2. We clearly stated the data source and pre-processing steps, and we justified why we selected RF, XGB, and LSTM in the context of DCs. We performed the necessary feature engineering and feature selection techniques to identify relevant features, as they provide insights, improve model performance, and reduce training resource requirements. We also trained and validated the models by splitting the dataset into 80% for training and 30% for testing and evaluated them using error metrics (see the sketch after this list).
  3. The effectiveness and practical implications of SHAP are discussed.
  4. We noted this very well and discussed in detail the computational complexity and real-time operational challenges. Generating SHAP explanations for predictions in real time may not always be feasible due to the computational overhead of calculating SHAP values, especially for complex models or large datasets. We explore these challenges further in the limitations section.
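A minimal sketch of the hold-out training and error-metric evaluation described in point 2, under generic assumptions: the random data, split ratio, and RF model below are illustrative stand-ins, not the authors' CRESCO6 pipeline.

```python
# Illustrative only: hold-out split, model fitting, and error-metric evaluation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 50))   # placeholder for selected DC features
y = rng.normal(size=10_000)         # placeholder target, e.g. ambient temperature

# Hold out part of the data for testing (the ratio here is illustrative).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X_train, y_train)
pred = model.predict(X_test)

print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```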

Thanks in advance

regards 

Reviewer 2 Report

Comments and Suggestions for Authors

For the manuscript "AI for Automating Data Center Operations: Model Explainability in the Context of Data Centre Using SHAP", the following queries need to be addressed.

#Introduction

1. The contribution discussed in the manuscript needs to be reframed.

2. Motivation is missing

#Materials and Methods

1. A section should be added that discusses related work and prior work on XAI, which is missing from the manuscript. Machine learning and its role also need to be discussed; this article may be helpful: "Khanday, A. M. U. D., Khan, Q. R., & Rabani, S. T. (2020, December). Analysing and predicting propaganda on social media using machine learning techniques. In 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN) (pp. 122-127). IEEE."

2. The authors use only Shapley Additive Explanations (SHAP); why not other techniques such as LIME? (A brief SHAP-vs-LIME sketch follows these points.)

3. How do the authors validate their approach, and did they perform any feature reduction techniques? The data source also needs to be mentioned, which would be helpful for other relevant researchers.

4. Comparison with previous work is missing.

5. Limitations of the work need to be discussed.

6. There are a few grammatical mistakes which need to be rectified.
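As a companion to point 2 above, a minimal sketch of how SHAP and LIME attributions for a single prediction could be placed side by side; the model, feature names, and data are assumptions made for illustration and are not taken from the manuscript.

```python
# Illustrative only: compare SHAP and LIME explanations for one prediction.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = [f"sensor_{i}" for i in range(20)]   # hypothetical features
X_train = rng.normal(size=(2_000, 20))
y_train = rng.normal(size=2_000)
x = X_train[0]                                       # instance to explain

model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)

# SHAP: exact additive attributions for tree models via Tree SHAP.
shap_vals = shap.TreeExplainer(model).shap_values(x.reshape(1, -1))[0]

# LIME: perturbation-based local surrogate fitted around the instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="regression"
)
lime_exp = lime_explainer.explain_instance(x, model.predict, num_features=5)

top_shap = sorted(zip(feature_names, shap_vals), key=lambda t: -abs(t[1]))[:5]
print("Top SHAP features:", top_shap)
print("Top LIME features:", lime_exp.as_list())
```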

Comments on the Quality of English Language

Satisfactory

Author Response

Dear Reviewer, we really appreciate the time you took to review our contribution. We have revised our work and addressed the issues raised.

#Introduction

1. The contribution discussed in the manuscript has been reframed.

2. We revised it. The main motivation for conducting this work is the increasing use of AI and ML in DCs. As these models are black boxes, XAI has become a major concern, which motivated us to conduct this research to improve the models' transparency and trustworthiness. In addition, capturing the temporally changing behaviour of feature relevance with static SHAP is challenging in such a dynamic environment, which further motivated this work.

#Methods

  1. Yes, we included related work and the work done in XAI.
  2. We provided extensive justification for why we focused on the SHAP method.
  3. We performed the necessary pre-processing and feature selection techniques, and the data source is clearly stated (an HPC data center, the CRESCO6 cluster in Italy).
  4. Yes; to the best of our knowledge, we could not find comparable methods implemented in the DC context.
  5. Agreed. We added a limitations section.

With best regards

Reviewer 3 Report

Comments and Suggestions for Authors

Dear authors,

thank you very much for submitting your article to the Electronics journal. Overall, I think your article addresses the very important objective of energy management in data centers. I also find the idea of deriving rules for operation from a machine learning model and explainable AI techniques beneficial. Here are my remarks:

Structural remarks:

- The chapter heading 2.2 should make reference to the ML techniques described there

- The conclusion chapter is much too short

Clarity and context:

- All abbreviations must be written out in full the first time they are used

- 2., Figure 1: The figure is very generic; inconsistency between "Includes:" and "includes cleaning"; what is the meaning of the icons for testing data and the two icons for training data? Are the icons royalty-free?

- 2.1: The list of input parameters should be put into an appendix

- 2.1, Table 1: Use thousands separators.

- 54 input features in the text vs. 57 features in Table 1.

- The hyperparameters of 2.2.1 and 2.2.2 should be visualized in a table as in 2.2.3.

- The NN structure of 2.2.3. should be visualized graphically

- There are many typing mistakes: efore (l.178),  follws (l.196), "." missing (l.244), "fu" (l. 320)

- Search for non-academic expressions like: strong, straightforward, we

- Unclear expression, l. 215: "The input includes input features and specified time steps of the sequence, with a window size of 10 time_steps at a 15-minute resolution."

- Unclear: fair rewards for every player, local accuracy, null effect

- Figure captions should be much shorter and begin with the name of the diagram, e.g., "Waterfall plot of ..."

Methodology:

- 2.3: The selection of SHAP is only roughly justified.

- If you want to learn about the specific effects of features in an optimization, why not conduct a DOE (design of experiments) and determine the effect of every feature and their interactions?

- Table 3: There is more to discuss about the results shown in this table, e.g., why are the LSTM networks so much better, and why continue with RF/XGB then?

- Conclusions drawn from Figure 8 are only roughly described.

- Why is "supply_air" chosen for Figures 7 and 8?

References:

- The number of references is at the lower limit for an academic paper (rule of thumb: 1.5 references per page).

- Check for consistency for page ranges according to the chosen citation style.

12. -> A very good book, but incomplete bibliographic data  and not the best choice for an academic journal

21. No "et al." in the list of references

22. / 23. Bad source, incomplete bibliographic data

Originality and significance

You set yourselves the goal of using explainable AI techniques to derive instructions for the operation of a data center, but I see flaws in deriving this convincingly. You only give brief exemplary recommendations (l. 295ff, l. 341ff). A detailed discussion and derivation of more practical instructions is missing, so I feel that you fail to achieve the goal you set yourselves. In conclusion, I therefore wonder what the added value of your article is: you develop three machine learning models with data center variables and the target variable of ambient temperature and apply SHAP to these models using an available Python library. Furthermore, the discussion of the results and the derivation of recommendations leaves a lot of room for improvement.

All in all, I think the topic of the article is very relevant. However, there are many weaknesses in the article. Overall, the article could have been written more carefully. The discussion of partial results could have been much more detailed and methodological steps are not explained well enough. Ultimately, there is no detailed derivation and discussion of recommendations for the operation of data centers. This limits the article's overall added value.

A major review is recommended.

Best regards,

Reviewer

 

Comments on the Quality of English Language

Please see remarks above.

Author Response

Dear Reviewer, we really appreciate the time you took to review our contribution. We have addressed the issues raised as far as we could.

  1. Thanks. We addressed the clarity issues and typos.
  2. We included literature and detailed why SHAP was selected and used in this work.
  3. We also justified why we implemented RF, XGB, and LSTM to demonstrate XAI in the DC context.
  4. All other headings, sections, and subsections were reformatted and corrected.
  5. We also expanded the conclusion section.
  6. We corrected the problematic citations and references.

With best regards

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The revised version of the manuscript is much better and has addressed all previously raised issues. Thus, the manuscript can be accepted for publication.

Comments on the Quality of English Language

Minor editing of the English language is required.

Author Response

Dear Reviewer,

Many thanks again for taking the time to review our contribution.

 
We reviewed the comments and suggestions and then carefully revised and improved all the contents.

 

with best regards

Reviewer 3 Report

Comments and Suggestions for Authors

Dear authors,

thank you very much for your efforts to revise your article. For your next revision, I kindly ask you to explicitly comment on every remark of the reviewer by giving the specific remark, the location of the changes, and a summary of your changes. It is almost impossible for me as a reviewer to track your changes in a document that is almost completely highlighted in yellow! I also dislike the typing mistakes in the section "Authors' Responses to Reviewer's Comments".

As far as I can see, you carefully considered my remarks. But I still have the following remarks:

- Please take care of consistency: start text in tables with capital letters and use a consistent format (Table 1).

- Please avoid chain citations with more than three references in a row (l. 24, l. 29, etc.).

- Please use short captions for figures (Figure 5 ff.). Longer descriptions belong in the explanatory text.

- Please add an introduction between Section 5 and Section 5.1.

- Please comment in the conclusion section on the limitations mentioned in Section 4.

Overall, a minor revision is necessary.

Best regards,

Reviewer

Author Response

Dear Reviewer,

many thanks again for taking the time to review our contribution.

 
We reviewed the comments and suggestions and then carefully updated and revised all the content.

 

With best regards
