Article
Peer-Review Record

Validating and Testing an Agent-Based Model for the Spread of COVID-19 in Ireland

Algorithms 2022, 15(8), 270; https://doi.org/10.3390/a15080270
by Elizabeth Hunter * and John D. Kelleher *
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 14 July 2022 / Revised: 29 July 2022 / Accepted: 1 August 2022 / Published: 3 August 2022
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation)

Round 1

Reviewer 1 Report

I have enjoyed reading this manuscript. The model is described in great detail. I found its formulation sound and in line with the existing literature. The model's validation procedure is carried out very scrupulously, discussing and addressing issues that may arise due to scaling of the model. I have no issues with the manuscript as it is.

Author Response

We would like to thank the reviewer for their review of our paper and their positive comments.  

Reviewer 2 Report

Dear Authors,

Thank you very much for the opportunity to review your manuscript. While I very much enjoyed reading about your work, I do have a number of questions and suggestions about your manuscript, mostly about the organization of the paper:

Major comments:

1. There are a number of subsections in the "Materials and methods" section that essentially contain results. Specifically, on page 8, you detail the cross-validation results within model validation. Similarly, there are results of the sensitivity analysis at the end of page 11, and of the comparison to real data on page 12. I understand that you considered these as a preamble to the "actual" results, showing that one can trust the model. However, the title of your paper is "Validating and testing...", so the results of that validation and testing would need to be in the Results section to make sense. Another option is to decide that the main point of the paper is the impact of increased community mixing when schools open in the fall, and then have a title that reflects that. In that case, I could see the validation results remaining where they are, much abbreviated, or perhaps moved to supplementary material.

2. On the flip side, the methods for the simulation runs investigating the impact of increased community mixing are described in the Results section on page 13, in the 2nd and 3rd paragraphs. My recommendation would be to put all the methods in the Methods section, and then have all the results, including the validation and testing, in the Results section, each in their own subsections. While I can see how that might break the flow of logic, I believe it would still make more sense.

3. You do have a large number of figures, which could easily be reduced. Figures 1-6 could all be panels in one large multipanel figure, with Figures 1, 3, and 5 on the left-hand side and Figures 2, 4, and 6 on the right-hand side. Similarly, Figures 9 and 10 seem to show the same information; there is no point in having both of them. The two panels of Figure 10 could be overlaid on each other as in Figure 9, perhaps with different colors.

4. My last major point relates to the timeliness of the information in your manuscript. The manuscript reads as if it was written in the Spring of 2021, and sat in a drawer since. Obviously, the situation currently is very different from what it was during 2020, with widely available vaccinations, different viral strains, and substantial herd immunity, and pretty much relaxed regulations. Now I understand that you cannot be expected to rerun all your simulations to reflect the current situation, which might change in a month or two. But I do expect that the current situation would be acknowledged and the limitations of the study mentioned in the Discussion, which I do not currently see. In a related manner, you do know at this point what happened in reality when schools opened at the end of 2020, and whether or not community mixing increased. Does that match with the results on Figure 9 and 10? Which does it match better? Freshening up the paper to make it more up-to-date would be worthwhile.

Minor comments:

1. On page 1, line 3, please replace "taking to long" with "taking too long".

2. In the Introduction, on page 2, you talk about agent-based models. However, you don't mention other agent-based models of COVID-19, although there are many. The one that comes to mind is COVASIM, developed by the Institute for Disease Modeling in Seattle, WA (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8341708/), but there are many others.

3. On page 3, section 2.1.2, you describe the Environment Component of your model. I understand that the patches in NetLogo are counties or cities in Ireland, and that each one has a number of schools and workplaces. However, within a patch, is space discrete or continuous? Do schools or workplaces have lat/long or x/y coordinates, and do people move more to closer places? Or are these places within a patch homogeneously mixed, with equal distance?

4. On page 3, section 2.1.3, you mention the infectious components. However, on page 4, you mention presymptomatic individuals who are not mentioned on page 3. Why is that? How are those different?

5. On page 3, section 2.1.3, you mention how exposed agents will stay exposed for a time before becoming infectious, and infectious agents will stay infectious for a set time before becoming recovered. It sounds like that time is the same for every agent. However, later you write that it is different for each agent, based on a probability distribution. Please clarify that here.

6. On page 3, section 2.1.3, you say that recovered agents can no longer be infectious and cannot be re-infected. We know that this is not true for COVID-19, even for the same strain. Why did you choose this assumption then? How might this impact your findings?

7. On page 3, line 142, change "an agent" to "the agent".

8. On page 4, in section 2.1.4, you describe the transport of agents between different locations, including children to school. I do wonder, are teachers included in the schools as a workplace, since students and teachers can and do infect each other. If not included, how would that impact your results?

9. On page 7, section 2.2.1, you describe the model validation through cross-validation with your county model. Why is that a good comparison? How do we know that the original county model is a good benchmark? Has that been validated against data? How different is that single county model? You could potentially calculate some measure of discrepancy between the scaled model and the county model, for example a root-mean-squared difference, to quantify the deviation between average results.
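The root-mean-squared difference the reviewer suggests could be computed as in the sketch below. The arrays are purely illustrative placeholders, not values from the paper; in practice they would hold the averaged daily outputs of the scaled model and the original county model over the same time window.

```python
import numpy as np

# Hypothetical averaged daily-case series from the scaled model and the
# original county model (illustrative values only, not from the paper).
scaled_avg = np.array([5.0, 8.2, 12.1, 17.5, 24.0])
county_avg = np.array([4.6, 8.9, 11.4, 18.3, 25.2])

# Root-mean-squared difference between the two averaged series: a single
# number quantifying the deviation, in the same units as the series.
rmsd = np.sqrt(np.mean((scaled_avg - county_avg) ** 2))
print(round(rmsd, 3))
```

Because RMSD is in the same units as the case counts, it can be read directly against the magnitude of the epidemic curves being compared.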

10. On page 10, line 362, you state that you aim to pick the number of runs where the size of the confidence interval around Re is small enough. What is small enough? How is that determined?
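One common way to make "small enough" concrete is to fix a tolerance on the half-width of the confidence interval around the mean Re, and increase the number of runs until the half-width falls below that tolerance. A minimal sketch, using synthetic per-run Re values rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-run Re estimates (synthetic; mean and spread are
# illustrative assumptions, not the paper's results).
re_runs = rng.normal(loc=1.8, scale=0.3, size=500)

def ci_half_width(samples, z=1.96):
    """Half-width of an approximate 95% CI around the sample mean."""
    return z * samples.std(ddof=1) / np.sqrt(len(samples))

# The half-width shrinks roughly as 1/sqrt(n), so one picks the smallest
# n whose half-width is below a pre-chosen tolerance (e.g. 0.05).
for n in (50, 100, 200, 500):
    print(n, round(ci_half_width(re_runs[:n]), 4))
```

Stating the tolerance explicitly (and why it was chosen) would answer the reviewer's question of how "small enough" is determined.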

11. On page 11, in section 2.2.2, you detail the methods and results of your sensitivity analysis. For your sensitivity analysis, you have very specific initial conditions, such as the number of infectious agents in different subclasses. Why did you pick these specific initial conditions? Do these represent the conditions in Ireland at a particular timepoint? Please clarify!

12. On page 11, you have a footnote about the scaling factor. Please move this into the text.

13. On Figures 7 and 8, I would suggest including measures of uncertainty, such as individual runs, a min-max range, the standard deviation, the standard error, or a confidence interval.

14. On page 12, section 2.2.3, you describe the comparison of simulations to real data. However, I can't see what the initial conditions were at the start of these simulations. Did they match the actual numbers? Or were they the same as in section 2.2.2? Please clarify!

15. On page 12, section 2.2.3, in the first paragraph, you describe the timeline of interventions in Ireland during the time period in which you compare simulation output to real data. My suggestion would be to add a graph showing these interventions, such as a Gantt chart or something similar.

16. On page 12, section 2.2.3, line 461, you describe how there needs to be more than one infected agent in the scaled model, and how this causes discrepancy from the real data. This reminded me: did you have any simulation runs in which COVID-19 went extinct? What did you do with those results? Did you omit those runs and rerun? Did they get included in the average? Or was there continuous input of COVID-19 from outside, such as immigration into Ireland?

17. On Figure 9, what date does Day 0 represent? Why does the simulation stop at 60 days? What date is that? Is that already winter break?

18. On Figure 11, the legend is confusing in terms of denoting the colors for the different parts of society and the line styles showing increased mixing or not. I would suggest making a legend where home, school, work, and community are shown both for no change in mixing (solid line) and for increased mixing (dashed line).

19. On page 16, in the Discussion, I would appreciate references to COVID-19 models, and agent-based models in particular. There are no references cited in the Discussion, making it difficult to put your paper into the context of the larger literature. For example, how do your model and the results you obtained compare to similar published models (e.g., COVASIM)?

20. At the bottom of page 16, you describe how an extension of your model would allow for variation in scaling between agents and actual people in real life, with differences between how different groups are scaled. I guess I'm not quite clear on why this would be a good idea, and I worry that it would introduce artefactual results. Please clarify!

21. On page 17, lines 583-585, you suggest that your results might indicate that "opening schools might be relatively safe and the resulting increase in cases around school opening might be more impacted by the actions outside of schools than within schools". I'm not quite convinced of this rationale for your results. Can you support it with an explanation of how the interactions between groups and movement would result in school opening being relatively safe? Are students interacting with fewer other students than with people in the community?

22. Finally, is your model publicly available, either as an executable, or as the source code for others to inspect, potentially reuse or further develop? If yes, where can the reader find it?

 

Author Response

We would like to thank the reviewers for their thorough review of our paper and their insightful comments. We think that addressing the comments has greatly improved our paper. Please see the attachment with a point-by-point response to the comments.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Dear Authors,

Thank you very much for revising your manuscript so thoroughly and so quickly. I'm happy to read that you found my comments and suggestions useful, and I do believe that your manuscript is an excellent contribution to the literature on agent-based models of disease transmission. I'm looking forward to seeing more of your work in the future!

Krisztian Magori
