Article
Peer-Review Record

Performance Evaluation of Open-Source Serverless Platforms for Kubernetes

Algorithms 2022, 15(7), 234; https://doi.org/10.3390/a15070234
by Jonathan Decker 1,*, Piotr Kasprzak 2 and Julian Martin Kunkel 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 27 May 2022 / Revised: 24 June 2022 / Accepted: 28 June 2022 / Published: 2 July 2022
(This article belongs to the Special Issue Performance Optimization and Performance Evaluation)

Round 1

Reviewer 1 Report

The article is well written, has a clearly presented structure, and describes the methodology of the experiments in detail. The results show that much remains to be studied in this area. Benchmarks such as these should be developed and improved further, covering a greater variety of scenarios on these platforms as well as on similar ones.

Author Response

Thank you for taking the time to review our manuscript. We appreciate your support for our work. Nevertheless, we took the opportunity to make a few improvements to our submission:
- We improved our English text, fixed a few typos, and refined the wording in some places;
- We fixed an issue with the bibliography not correctly showing the years of references, caused by exporting our references in BibLaTeX format instead of the BibTeX format required by the template;
- We expanded the description of the experiment setup to also include a description of the network;
- We added details to the benchmark description, such as which images were used for the image processing test;
- We prepared a GitHub repository that holds the complete source code used in the study, together with a setup description for reproducing it;
- We expanded our literature review by adding a "Related Work" subsection under "Introduction" that further places our study in the context of benchmarking serverless platforms.

We would like to thank you again for taking the time to review our manuscript.

Reviewer 2 Report

1.      This paper presents an evaluation of two open-source serverless platforms, OpenFaaS and Nuclio, for Kubernetes. However, there are some comments, as follows.

2.      All abbreviations should be introduced before use, e.g. HPC, SLURM, AWS, etc.

3.      The publication year of all references should be added.

4.      The literature review of this manuscript is weak. More related works can be added. Also, their pros, cons, and other important parameters should be compared and explained.

5.      The system model or architecture of the system is missing from the figures. All hardware, software, network layers, algorithm steps, inputs, and outputs should be indicated in one or more figures.

6.      All source code should be removed from the paper and published on GitHub; the authors can then insert the link in the paper.


7.      There is plenty of research on open-source serverless platforms for Kubernetes, so the contribution of this paper is not sufficient. Also, as an analysis paper, more comparisons and more detailed evaluations are required.

Author Response

Thank you for taking the time to review our manuscript. We have addressed the concerns you mentioned and changed our submission accordingly.
Overall, we improved our English text, fixed a few typos, and refined the wording in some places.

Regarding comment 2: We added the expanded forms of the abbreviations you mentioned (HPC, AWS), as well as API and MPI, and clarified that SLURM should be written as Slurm, as it is not an abbreviation but the name of a program. We assume that readers will be able to understand abbreviations such as CPU, GPU, TCP, and MiB without further explanation.

Regarding comment 3: Thank you for pointing this out. This was a mistake on our side, as we had exported our references in BibLaTeX format instead of the BibTeX format required by the template. The years now appear correctly in the bibliography.

Regarding comments 4 and 7: Our literature review was indeed somewhat short. We have added a "Related Work" subsection to the "Introduction" section that references six more papers and helps to establish the context of our study.

Regarding comment 5: We expanded the description of the experiment setup by adding a paragraph that describes the network and all steps a request passes through. Furthermore, we added more details to the Test and Benchmark section, such as which images were used for the image processing test. All required parts should now be indicated in the text.

Regarding comment 6: We agree that all source code should be published to ensure the comparability and reproducibility of our study. We prepared a GitHub repository with all the code used for the study, together with a description for setting up the environment. We linked the repository at the end of the Test and Benchmark section and at the beginning of the Appendix. Nevertheless, we prefer to keep the appendix as it is, because it makes it easier to follow our argument about whether input data must be copied into a function. Having to search the repository for the code supporting this argument would make it less clear.

We would like to thank you again for taking the time to review our manuscript.

Reviewer 3 Report

The paper deals in depth with the performance comparison of two open-source serverless platforms hosted on a Kubernetes environment. The topic is highly relevant given the wide adoption of serverless architectures in modern cloud environments.
The authors conducted a well-designed and well-executed performance comparison between the OpenFaaS and Nuclio platforms. An extensive discussion of the results shows that Nuclio performs better than OpenFaaS, although the performance of both platforms is still not up to the standard of HPC systems.
It is important that this study be extended to other open-source serverless platforms.

Author Response

Thank you for taking the time to review our manuscript. We appreciate your support for our work. Nevertheless, we took the opportunity to make a few improvements to our submission:
- We improved our English text, fixed a few typos, and refined the wording in some places;
- We fixed an issue with the bibliography not correctly showing the years of references, caused by exporting our references in BibLaTeX format instead of the BibTeX format required by the template;
- We expanded the description of the experiment setup to also include a description of the network;
- We added details to the benchmark description, such as which images were used for the image processing test;
- We prepared a GitHub repository that holds the complete source code used in the study, together with a setup description for reproducing it;
- We expanded our literature review by adding a "Related Work" subsection under "Introduction" that further places our study in the context of benchmarking serverless platforms.

We would like to thank you again for taking the time to review our manuscript.

Round 2

Reviewer 2 Report

All comments have been addressed. The manuscript is fine now.
