Peer-Review Record

A Practical Approach to Assessing the Impact of Citizen Science towards the Sustainable Development Goals

Sustainability 2022, 14(8), 4676; https://doi.org/10.3390/su14084676
by Stephen Parkinson *, Sasha Marie Woods, James Sprinks and Luigi Ceccaroni
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 28 February 2022 / Revised: 1 April 2022 / Accepted: 6 April 2022 / Published: 13 April 2022

Round 1

Reviewer 1 Report

The article has been developed based on data from the H2020 MICS project. The article aims at creating "a practical approach to assess the impact of 107 citizen-science projects towards the SDGs". The topic in itself is not very original, but this article presents new data that highlights the SDG perspective in evaluating CS. It provides quantitative as well as qualitative analysis, which is overall a good match. Overall, the quality of the article is high; however, the conclusions might be improved by adding a clearer explanation of the main finding.

Author Response

The article has been developed based on data from the H2020 MICS project. The article aims at creating "a practical approach to assess the impact of 107 citizen-science projects towards the SDGs". The topic in itself is not very original, but this article presents new data that highlights the SDG perspective in evaluating CS. It provides quantitative as well as qualitative analysis, which is overall a good match. Overall, the quality of the article is high; however, the conclusions might be improved by adding a clearer explanation of the main finding.

Thank you for your comments on the manuscript. We have expanded the conclusion as per your suggestion. 

Reviewer 2 Report

The article “A practical approach to assessing the impact of citizen science towards the Sustainable Development Goals” brings an interesting framework to assess how citizen-science projects contribute to the UN SDGs. Impact categories are distinguished between monitoring the SDGs and achieving the SDGs, which significantly contributes to disentangling the complex outcomes particular to citizen-science projects. Hence, contributions can be practically assessed beyond the concept of citizen science as a non-standard data source. This tool will be incorporated into a more extensive impact assessment framework. While the presented framework contributes significantly to this practical and conceptual issue, it was proposed and validated by a small number of researchers and project coordinators, which could lead to biased results. Even though bias sources related to project coordinators are acknowledged by the authors, it would be interesting to know whether further validations of the tool are expected. Since there is a strong practical appeal to using the tool in a diverse range of projects, it is recommended that validation processes include such diversity. In general, the impact assessment framework is a relevant advance in assessing the SDGs in citizen-science projects with objective criteria. Detailed comments are below.

Lines 124-125: The two-round interview process is not clear. Why were there two rounds of interviews, and what is the difference between the rounds?

Line 240 – Participants: What was the participants’ academic background? Were they from diverse institutions or mainly academic? Was there any participant from the social sciences? These details are important and should be included if possible.

Table 3 refers to the necessity of adapting language in some questions, which can also be influenced by the background of project coordinators.

Line 404: Bias could also arise from the authors’ perspectives, since you selected the topics and questions in the first place (which cover only a small portion of the targets and indicators). Since SDG targets and indicators were not equally assessed, this bias source could also be considered. Please comment on whether mechanisms to avoid this kind of bias were implemented.

Assessing SDGs and impact in citizen-science projects is a complex task. Hence, it would be important to specify whether a question refers to (a) intentions or implemented actions, and (b) short-, medium- or long-term impact. My concern is that, depending on the project, coordinators may respond differently. This is important as the proposed tool relies on self-assessment. These topics were discussed in the paper, but the conclusion is not clear.

Lines 423-424: Similar to the above, there was an issue regarding the word “explicitly” in some questions. How was this solved? It is not clear whether “explicitly” necessarily refers to direct or measured impacts (or whether indirect and unmeasured impacts may be considered too). For instance, in question 159 about gender equality, which actions or intentions are expected for a “yes” response? Have you considered including examples in each response to guide project coordinators while answering the form?

Are there any other validation steps planned for the tool before its implementation? It would be interesting to investigate whether response interpretation is similar among more diverse project coordinators.

Small comment:

Line 142: Table 4 is referenced before the other tables.

Author Response

The article “A practical approach to assessing the impact of citizen science towards the Sustainable Development Goals” brings an interesting framework to assess how citizen-science projects contribute to the UN SDGs. Impact categories are distinguished between monitoring the SDGs and achieving the SDGs, which significantly contributes to disentangling the complex outcomes particular to citizen-science projects. Hence, contributions can be practically assessed beyond the concept of citizen science as a non-standard data source. This tool will be incorporated into a more extensive impact assessment framework.

Thank you for taking the time to review the manuscript and for your positive comments.  

While the presented framework contributes significantly to this practical and conceptual issue, it was proposed and validated by a small number of researchers and project coordinators, which could lead to biased results. Even though bias sources related to project coordinators are acknowledged by the authors, it would be interesting to know whether further validations of the tool are expected. Since there is a strong practical appeal to using the tool in a diverse range of projects, it is recommended that validation processes include such diversity.

We plan to test the tool as it is implemented in an online platform. This will include gathering feedback from at least 30 citizen-science projects, which will hopefully help to reduce the bias you have identified. A reference to this testing has been added to the conclusions, lines 587-590.

In general, the impact assessment framework is a relevant advance in assessing the SDGs in citizen-science projects with objective criteria. Detailed comments are below.

Lines 124-125: The two-round interview process is not clear. Why were there two rounds of interviews, and what is the difference between the rounds?

The interviews were split into two rounds to allow for iterative testing, so that any issues identified with the questions in the first round could be addressed and retested in the second round. The difference between the two rounds was therefore that the questions asked were slightly different. We have moved the explanation of the iterative testing to lines 122-124 so that it comes before the description of the two interview rounds.

Line 240 – Participants: What was the participants’ academic background? Were they from diverse institutions or mainly academic? Was there any participant from the social sciences? These details are important and should be included if possible.

Thank you for pointing this out; it makes an interesting addition to the manuscript. The project coordinators worked in a variety of organisations: five worked at private research institutes, three worked at universities, two were representing NGOs, one worked at a think tank, and one at an SME. We have included these details in lines 260-263. A majority of the participants were from an academic background, but, as you can see, other institutions were also represented. We will aim to achieve more diversity in future testing of the platform.

Table 3 refers to the necessity of adapting language in some questions, which can also be influenced by the background of project coordinators. 

Thank you for this comment; it is a very important point. The response to the wording of each question is inevitably influenced by the background of each project coordinator, which is why the iterative testing we undertook was so important. Any changes suggested by participants could be compared against the comments from other interviewees to make sure they did not contradict the wishes of other interviewees, and any changes made to the wording could be tested in subsequent rounds of interviews to make sure that a change made for one participant did not negatively affect the question for other users. As you have already pointed out, the limitation of this approach is the relatively small number of projects involved in testing; this will be addressed in planned further rounds of testing. We have added more discussion around this point in the section on limitations, lines 540-551.

Line 404: Bias could also arise from the authors’ perspectives, since you selected the topics and questions in the first place (which cover only a small portion of the targets and indicators). Since SDG targets and indicators were not equally assessed, this bias source could also be considered. Please comment on whether mechanisms to avoid this kind of bias were implemented.

Thank you for this observation. It is true that there will be a level of bias from our selection of the topics and questions. To mitigate this bias, the selection of questions was based on an extensive literature review of current impact assessment approaches in citizen science. Some additional questions were added based on the SDGs which we thought existing citizen-science projects might be interested in. Some level of bias may remain in the selection of topics, but because the questions cover all the topics typically addressed in citizen-science impact assessment, we hope that few relevant topics have been omitted. Again, this is something that can be verified in future testing of the questions. We have added these details to the section on limitations, lines 516-523.

Assessing SDGs and impact in citizen-science projects is a complex task. Hence, it would be important to specify whether a question refers to (a) intentions or implemented actions, and (b) short-, medium- or long-term impact. My concern is that, depending on the project, coordinators may respond differently. This is important as the proposed tool relies on self-assessment. These topics were discussed in the paper, but the conclusion is not clear.

Thank you for highlighting this challenge in self-assessment tools. The questions are purposefully left open to interpretation so that they can be used by projects at different stages. This could create some inconsistency in how the questions are answered. We will try to minimise this risk by providing additional guidance on how users should approach answering the questions. We have added a discussion of this issue in the section on limitations, lines 552-565.

Lines 423-424: Similar to the above, there was an issue regarding the word “explicitly” in some questions. How was this solved? It is not clear whether “explicitly” necessarily refers to direct or measured impacts (or whether indirect and unmeasured impacts may be considered too). For instance, in question 159 about gender equality, which actions or intentions are expected for a “yes” response? Have you considered including examples in each response to guide project coordinators while answering the form?

There will always be a level of subjective interpretation of questions. However, we have done our best to reduce this in the way we have worded the questions. The use of the word “explicitly” is one example of this, and the comments from the workshop and interviews indicated that it reduced the ambiguity in the question. Nevertheless, a level of subjectivity remains. Help and information text will therefore be displayed alongside the questions in the platform, which should aid users’ interpretation of the questions. We have included this as an example in the discussion we added to the section on limitations, lines 540-551.

Are there any other validation steps planned for the tool before its implementation? It would be interesting to investigate if response interpretation is similar among more diverse project coordinators. 

As previously explained, we plan to continue testing the tool as it is implemented in an online platform. A reference to this testing has been added to the conclusions, lines 587-590.

Small comment: Line 142: Table 4 is referenced before the other tables.

Thank you for pointing out this error. We have removed the reference to Table 4.

Reviewer 3 Report

The paper is readable but long.

There are some minor points of grammar and punctuation needing attention. There is some repetition in places. In a few places, some clarification or detail is needed. Some references need to be completed or tidied.

Please see comments and suggestions on manuscript.

Comments for author File: Comments.pdf

Author Response

The paper is readable but long. 

There are some minor points of grammar and punctuation needing attention. There is some repetition in places. In a few places, some clarification or detail is needed. Some references need to be completed or tidied.

Please see comments and suggestions on manuscript. 

Thank you for your comments and detailed review of the manuscript. We have corrected all the grammar and punctuation issues you identified and rechecked the rest of the manuscript for other errors. We have also updated the references as per your suggestions. More detailed responses to the comments you made on the manuscript are below:

  • Hyphenation of citizen science: you picked up on the fact that in some cases we hyphenate citizen science and in others we do not. Whilst this might seem inconsistent, we follow a rule to determine when to use a hyphen: when citizen science is used as an adjective modifying another noun, we use a hyphen, as in “citizen-science project”; when citizen science is used by itself as a noun, no hyphen is needed. We hope this explanation is clear.
  • Thank you for pointing out the repetition between lines 121-122 and 131-134. We have merged these two sentences to reduce the repetition, lines 121-124.
  • Requested explanation for lines 374-376: the reference in these lines to the weighting of answers to questions is something we are currently working on whilst developing the MICS platform. Because this is still under development and the weighting of answers is not relevant to the rest of the manuscript, we decided to remove these lines rather than add further explanation.
  • Thank you for pointing out the lack of clarity in lines 437-440. We have rewritten this section to make our meaning clearer, lines 444-448. 