Article
Peer-Review Record

Globally Scheduling Volunteer Computing

Future Internet 2021, 13(9), 229; https://doi.org/10.3390/fi13090229
by David P. Anderson
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 18 August 2021 / Revised: 30 August 2021 / Accepted: 31 August 2021 / Published: 31 August 2021
(This article belongs to the Special Issue Parallel, Distributed and Grid/Cloud/P2P Computing)

Round 1

Reviewer 1 Report

This is an excellent work in which the author proposes a scheduling policy for volunteer computing with good results. The paper is also well written and easy to follow. The paper can be accepted in its present form.

This work addresses volunteer computing, which I think is an interesting research topic and relevant to this journal.
The author originally proposed the BOINC platform for volunteer computing; this work presents some improvements and a modified scheduling policy in the Science United system, which uses the BOINC platform.

In general, I believe this is an excellent work where the author proposes a global scheduling policy with good results.

Also, the paper is very well written: clear, precise, and easy to understand. The results are interesting.

Based on the previous comments, I recommend that the paper be accepted in its present form.

Author Response

Thank you for your review.

Reviewer 2 Report

Information about non-BOINC VC (Folding@home, etc.) should be provided in the Introduction.

The paper should compare the scheduling mechanisms present in other VC projects and also those used in HTC (I realize these are simpler due to the nature of HTC).

3.1 (224): "Active Science United volunteers average 4.8 “yes” keywords and 0.83 “no” keywords" - missing verb (and what is the total number of places to click yes/no?)

Please number equations.

Section 4 should be rewritten. The presentation of the scheduling mechanism is not clear.
4.4: A(P) is the "computing" done by a project - so is it FLOPS*time? Or time*number of CPUs? How are CPU and GPU compared as "computing done"?
"To avoid this, we dynamically adjust A(P) by an appropriate amount" - what is this amount?

4.7: I would like to see the equation for "score". Is the VirtualBox case due to the need to minimize network load and VirtualBox image transfer? Does the keyword factor change value when "yes" or "as needed" is set for a project, or with the number of "yes" keywords?

The goal of the paper is to present a scheduling mechanism, and the author gives us some hints about how it is done, but we cannot see how it works without knowing the details.

Section 5 gives numbers of jobs and performance figures; are these average values (over what period of time) or maxima?

Figs. 3 and 4: please add the year (2020).

Please remove sentences similar to those in your paper published in the Journal of Grid Computing.

More references on scheduling methods in computer systems should be provided and addressed in the Introduction or another section.

 

Author Response

1) I added a reference to Folding@home.

2) I added a Related Work section, comparing the work described here with other volunteer computing systems (OurGrid) and with Grid systems (Globus, Open Science Grid).

3) I added the word "selected" and gave the total number of keywords.

4) I added equation numbers.

5) Section 4 is about as clear as I can make it. It describes:
- how work is accounted for
- the factors that go into the policy
- how these factors are combined in a score function that ranks projects
- how ranked projects are selected in a way that uses all processing resources
The policy is complicated; describing it fully is inherently complicated. (A rough illustrative sketch of this flow is given below.)
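
For readers of this record, the following is a minimal, purely illustrative Python sketch of that flow: rank projects by a score that combines an allocation deficit with the volunteer's keyword preferences, then assign the best-scoring project. The factor definitions, field names, and weights here are assumptions made for illustration only; the actual Science United equations are given in the revised paper.

# Illustrative sketch only; not the Science United implementation.
# 'share', 'recent_computing', and the factor formulas are assumed for this example.

def keyword_factor(prefs, project_keywords):
    # Exclude projects having a keyword the volunteer marked "no";
    # otherwise give a bonus that grows with the number of "yes" matches.
    if project_keywords & prefs["no"]:
        return 0.0
    return 1.0 + len(project_keywords & prefs["yes"])

def allocation_factor(project):
    # Favor projects whose recent computing lags their allocation share.
    deficit = project["share"] - project["recent_computing"]
    return max(deficit, 0.0) + 1e-6   # small floor so ties can still be ranked

def score(prefs, project):
    return allocation_factor(project) * keyword_factor(prefs, project["keywords"])

def assign(prefs, projects):
    # Rank projects by score and pick the best eligible one for this computer.
    ranked = sorted(projects, key=lambda p: score(prefs, p), reverse=True)
    best = ranked[0]
    return best if score(prefs, best) > 0 else None

# Example with made-up data:
projects = [
    {"name": "A", "share": 0.5, "recent_computing": 0.2, "keywords": {"biomedicine"}},
    {"name": "B", "share": 0.5, "recent_computing": 0.6, "keywords": {"physics"}},
]
prefs = {"yes": {"biomedicine"}, "no": {"physics"}}
print(assign(prefs, projects)["name"])   # prints "A"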

6) I clarified what A(P) is (section 4.4)

7) I clarified how A(P) is adjusted when a computer is assigned to P.
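
As a purely hypothetical illustration of that adjustment (the names and the one-day horizon below are assumptions, not the paper's mechanism): when a computer is assigned to project P, A(P) can be bumped provisionally by the computer's expected contribution, so that later scheduling decisions do not over-assign to P before real accounting data arrives.

# Hypothetical sketch of a provisional adjustment to A(P); names are assumed.
ACCOUNTING_PERIOD_DAYS = 1.0   # assumed horizon over which A(P) is measured

def on_assign(A, project_name, host):
    # Add the host's estimated computing over the accounting period to A(P).
    expected = host["est_flops"] * 86400 * ACCOUNTING_PERIOD_DAYS
    A[project_name] += expected
    return expected

def on_accounting_update(A, project_name, actual, expected):
    # When real accounting arrives, replace the provisional estimate.
    A[project_name] += actual - expected

# Example: a ~100 GFLOPS computer assigned to project "P" for one day.
A = {"P": 0.0}
expected = on_assign(A, "P", {"est_flops": 1e11})
print(A["P"])   # 8.64e15 floating-point operations, provisional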

8) I added an equation for the score function,
and explained its terms.

9) The numbers in section 5 are averages over the last month; I added this.

10) I added "in 2020" to the captions for figures 3 and 4.

11) I compared this with my paper from the Journal of Grid Computing.
Not surprisingly, there are a few similar sentences in the first paragraph or two.
I changed these to make them slightly less similar.

12) I'm not sure if a general reference for job scheduling would help,
and I wouldn't know which one to use.
