Article
Peer-Review Record

Measuring and Promoting the Success of an Open Science Discovery Platform through “Compass Indicators”: The GoTriple Case

Publications 2022, 10(4), 49; https://doi.org/10.3390/publications10040049
by Stefano De Paoli 1,*, Emilie Blotière 2, Paula Forbes 1 and Sona Arasteh-Roodsary 3
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 12 September 2022 / Revised: 18 November 2022 / Accepted: 5 December 2022 / Published: 8 December 2022

Round 1

Reviewer 1 Report

The paper reports the approaches taken to developing an Evaluation Framework in the context of the GoTriple project. Reviewing this kind of paper, in which a process that has already taken place is reported, poses a challenge: the examination divides between the reporting itself (which here is clear and well done) and the process actually followed (where I disagree with some of the approaches taken). My overall view is that a paper reporting the actual process is appropriate.

I therefore divide my comments into two parts. The first of these relates to the reporting itself. In this case I think some more contextualisation and reference to other work is appropriate, and I think this is important to improve the paper. The second part is to offer some reflections on the process and results. These are offered in case the authors would like to consider the kinds of criticism that might be made of the paper and wish to address them up front.

Reporting and References

The paper provides a clear and coherent description of the processes followed and the decisions made. The indicators are well described. It might be valuable to more explicitly distinguish between indicators currently in use and those which require further development. I would consider promoting the table in Appendix 1 into the main paper for clarity. Where indicators have been implemented, I would additionally provide a detailed specification of the indicator (e.g. how is the "gender" indicator actually calculated? Is it specified as gender proportions compared to some benchmark, and if so, what benchmark is used?).

Alongside this, the paper could be more explicit about what has been implemented and what is planned in general. The public release was planned for recently, so this might be made clear by reference to a publicly available website, but it was not clear to me what stage of implementation the various indicators had reached.

Finally, I felt that the introduction and discussion did not address some of the core debates in the monitoring of open science that seemed relevant. One issue that I think needs more examination is the debate around qualitative vs quantitative indicators. The paper opens with a description which seems to indicate that it will argue for more qualitative indicators, but in fact the indicators presented are primarily quantitative in nature.

I would suggest including references to the work of the HuMetricsHSS group which seems particularly pertinent here. Another area is to consider critiques of the use of indicators in open science monitoring in general (eg Lizzie Gadd's observations on the topic) as well as the recent Agreement on Research Assessment and its emphasis on qualitative approaches.

Contextualising the choices made in the GoTriple process with that argument may help to clarify the distinction made between KPIs and Compass Indicators. Overall it seemed to me that the distinction was in how they would be used (as directions for continuous improvement rather than threshold acceptance KPIs). This distinction is useful, but I fear readers may expect a different distinction given the current quant/qual debates.

General comments on approach

I appreciate the authors and team have taken a pragmatic approach. I would nonetheless argue that some theoretical framing could have aided the development and framing. In particular I would point to the HuMetricsHSS process of identifying values and how they can be "made legible" as a means of developing monitoring and evaluation approaches.

I would expect a criticism of the approach to be that, in taking a broadly unstructured and iterative approach, there was the potential to miss issues that a framework might have surfaced. I would add that, from my perspective, adopting one of these approaches might have led to a consideration of more qualitative indicators. My sense is that there was a degree of identifying important issues and then seeking to find a tractable and measurable quantity linked to each, rather than a consideration of what the qualities of specific indicators or data sources might be.

There may be some ways that this kind of critique could be addressed in the paper - I don't see the comments above in this section as necessary revisions but as issues the authors may want to consider.

Finally, what I did appreciate was the focus on how the indicators would be used. I felt this section could be strengthened, and again, bringing the table into view in the main paper, perhaps expanded somewhat to work as a summary of the implementation and use of the indicators, would be valuable.

Author Response

ANSWER TO R1

Thank you for the very useful comments on our work; we provide below details of how we have addressed them. Reviewer comments are in uppercase.

 

I THEREFORE DIVIDE MY COMMENTS INTO TWO PARTS. THE FIRST OF THESE RELATES TO THE REPORTING ITSELF. IN THIS CASE I THINK SOME MORE CONTEXTUALISATION AND REFERENCE TO OTHER WORK IS APPROPRIATE, AND I THINK THIS IS IMPORTANT TO IMPROVE THE PAPER. THE SECOND PART IS TO OFFER SOME REFLECTIONS ON THE PROCESS AND RESULTS. THESE ARE OFFERED IN CASE THE AUTHORS WOULD LIKE TO CONSIDER THE KINDS OF CRITICISM THAT MIGHT BE MADE OF THE PAPER AND WISH TO ADDRESS THEM UP FRONT.

Reporting and References

THE PAPER PROVIDES A CLEAR AND COHERENT DESCRIPTION OF THE PROCESSES FOLLOWED AND THE DECISIONS MADE. THE INDICATORS ARE WELL DESCRIBED. IT MIGHT BE VALUABLE TO MORE EXPLICITLY DISTINGUISH BETWEEN INDICATORS CURRENTLY IN USE AND THOSE WHICH REQUIRE FURTHER DEVELOPMENT. I WOULD CONSIDER PROMOTING THE TABLE IN APPENDIX 1 INTO THE MAIN PAPER FOR CLARITY. WHERE INDICATORS HAVE BEEN IMPLEMENTED, I WOULD ADDITIONALLY PROVIDE A DETAILED SPECIFICATION OF THE INDICATOR (E.G. HOW IS THE "GENDER" INDICATOR ACTUALLY CALCULATED? IS IT SPECIFIED AS GENDER PROPORTIONS COMPARED TO SOME BENCHMARK, AND IF SO, WHAT BENCHMARK IS USED?).

Thank you for these comments. The plan is that all the proposed indicators will be in use (so there is no distinction between "in use" and "further development"), but we will see them in action only once a sufficient number of users is registered on GoTriple, which we expect to happen in early 2023, when the user engagement will hopefully start to deliver its intended outcomes (as discussed in the paper). This is clearly stated at the end of the findings section of the paper (lines 682-686), where we have now added some extra details. One important distinction is between indicators that can be derived from the analytics tools (which are readily available) and those for which we will need to write ad-hoc scripts in order to extract the data and build the indicators. The former are easier to create (and in fact are already in place) than the latter (which require some additional work). This is clearly stated in the lines mentioned above, and in the conclusion (lines 797-799) we now comment further on this aspect. These are the changes we made to address your comment.

Bringing the table into the main paper was considered as a possibility earlier in the manuscript preparation, but it is quite long and currently spans five pages. It does not seem very practical for readers to have such a long table right in the middle of the paper. In the text we explain very clearly the four core categories and provide some examples for each of them, and we keep referring readers to the table for more details. We would prefer to keep the long table in the appendix to facilitate reading of the manuscript.

You rightly point at the benchmark aspect. We have started an internal discussion on this, and the main table, where appropriate, mentions where some baseline data would be useful (see e.g. the gender or location indicators in Table 1); there are also references to this aspect in the findings section (see e.g. lines 610-628). However, as you rightly point out, the paper did not give much detail about how we plan to approach this problem. To answer your comment, we have now added in the conclusion an additional paragraph that clarifies this aspect and the work we plan on doing. As it stands, the identification of baseline data is one of the next steps for this work. This is now addressed in lines 799-820.

ALONGSIDE THIS, THE PAPER COULD BE MORE EXPLICIT ABOUT WHAT HAS BEEN IMPLEMENTED AND WHAT IS PLANNED IN GENERAL. THE PUBLIC RELEASE WAS PLANNED FOR RECENTLY, SO THIS MIGHT BE MADE CLEAR BY REFERENCE TO A PUBLICLY AVAILABLE WEBSITE, BUT IT WAS NOT CLEAR TO ME WHAT STAGE OF IMPLEMENTATION THE VARIOUS INDICATORS HAD REACHED.

Thank you for the comment. We agree that it was not clear at what stage the practical work is, although some elements of this were provided in the section "Automation and Dashboard". We have now clarified what has been done and what is still ongoing. In particular, creating part of the indicators will require developing some ad-hoc Python scripts and ad-hoc visualisations, and the work on this has just started. One of the programmers started working on this at the end of September and we expect to have a first working internal version by the end of 2022, with a possible roll-out in early 2023. This is now stated clearly in the paper in lines 681-686 and 797-800.
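For illustration only, the sketch below shows the kind of ad-hoc script referred to above: it computes a single gender-balance figure from an assumed export of registered users and compares it with an assumed external baseline. The file name, column names and baseline value are hypothetical assumptions, not the project's actual implementation.

import csv
from collections import Counter

# Hypothetical benchmark (e.g. share of women among SSH researchers);
# to be replaced with real baseline data once identified.
BASELINE_SHARE_WOMEN = 0.48

def gender_indicator(users_csv_path: str) -> dict:
    """Return the share of each self-declared gender among registered users."""
    counts = Counter()
    with open(users_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row.get("gender", "undeclared") or "undeclared"] += 1
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()} if total else {}

if __name__ == "__main__":
    shares = gender_indicator("registered_users.csv")  # hypothetical export file
    observed = shares.get("female", 0.0)
    # Read as a direction of travel (a "compass"), not a pass/fail threshold.
    print(f"Share of women among registered users: {observed:.1%} "
          f"(baseline: {BASELINE_SHARE_WOMEN:.0%}, gap: {observed - BASELINE_SHARE_WOMEN:+.1%})")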

 

FINALLY, I FELT THAT THE INTRODUCTION AND DISCUSSION DID NOT ADDRESS SOME OF THE CORE DEBATES IN THE MONITORING OF OPEN SCIENCE THAT SEEMED RELEVANT. ONE ISSUE THAT I THINK NEEDS MORE EXAMINATION IS THE DEBATE AROUND QUALITATIVE VS QUANTITATIVE INDICATORS. THE PAPER OPENS WITH A DESCRIPTION WHICH SEEMS TO INDICATE THAT IT WILL ARGUE FOR MORE QUALITATIVE INDICATORS, BUT IN FACT THE INDICATORS PRESENTED ARE PRIMARILY QUANTITATIVE IN NATURE.

Thank you for this comment. We agree that the text might have given the impression that we were developing qualitative metrics, but this is not the case. We have changed the wording in the introduction, clarified our position and added references to this debate. This is now addressed in the introduction (lines 122-126) and also in the discussion.

Note that in the discussion (lines 767-777), thanks to your comment, we now clarify that we are adopting quantitative measures for practical reasons. We are all in favour of qualitative research and have run a substantial number of interviews, codesign sessions, etc. for GoTriple. However, collecting qualitative data is costly and requires resources which may not be available after the end of the funding. An automated approach reusing existing data from the platform is comparatively cheap to run once it is set up. We are trying to set this up now, while we still have the funding, so that it will be in place and usable afterwards. We clarified this point in the findings (see lines 663-666).

 

I WOULD SUGGEST INCLUDING REFERENCES TO THE WORK OF THE HUMETRICSHSS GROUP WHICH SEEMS PARTICULARLY PERTINENT HERE. ANOTHER AREA IS TO CONSIDER CRITIQUES OF THE USE OF INDICATORS IN OPEN SCIENCE MONITORING IN GENERAL (EG LIZZIE GADD'S OBSERVATIONS ON THE TOPIC) AS WELL AS THE RECENT AGREEMENT ON RESEARCH ASSESSMENT AND ITS EMPHASIS ON QUALITATIVE APPROACHES.

CONTEXTUALISING THE CHOICES MADE IN THE GOTRIPLE PROCESS WITH THAT ARGUMENT MAY HELP TO CLARIFY THE DISTINCTION MADE BETWEEN KPIS AND COMPASS INDICATORS. OVERALL IT SEEMED TO ME THAT THE DISTINCTION WAS IN HOW THEY WOULD BE USED (AS DIRECTIONS FOR CONTINUOUS IMPROVEMENT RATHER THAN THRESHOLD ACCEPTANCE KPIS). THIS DISTINCTION IS USEFUL, BUT I FEAR READERS MAY EXPECT A DIFFERENT DISTINCTION GIVEN THE CURRENT QUANT/QUAL DEBATES.

 

Thank you for pointing us to this gap in our analysis of the literature. These contributions are now discussed in the literature review section and connected with the other literature we analyse. We read these materials carefully. HuMetricsHSS is interesting as it is directly connected with the social sciences and humanities. However, this work seems to have little overlap with what we are trying to do: we are not trying to develop indicators to evaluate research or researchers. More modestly, we are seeking to assess the success of building an online community. The suggestion of unpacking values and deriving evaluation processes from them is a potentially interesting approach, but it seems to stem from a critique of how universities operate, i.e. they claim to support something on paper (e.g. in their mission statements) but then in practice do not. We are not sure this applies to us in the way it is framed in the HuMetricsHSS whitepaper. In any case, all this literature and the potential connection with the evaluation of researchers are now included in the literature review, and we reflect upon these inputs (lines 235-270).

General comments on approach

I APPRECIATE THE AUTHORS AND TEAM HAVE TAKEN A PRAGMATIC APPROACH. I WOULD NONETHELESS ARGUE THAT SOME THEORETICAL FRAMING COULD HAVE AIDED THE DEVELOPMENT AND FRAMING. IN PARTICULAR I WOULD POINT TO THE HUMETRICSHSS PROCESS OF IDENTIFYING VALUES AND HOW THEY CAN BE "MADE LEGIBLE" AS A MEANS OF DEVELOPING MONITORING AND EVALUATION APPROACHES.

I WOULD EXPECT A CRITICISM OF THE APPROACH TO BE THAT, IN TAKING A BROADLY UNSTRUCTURED AND ITERATIVE APPROACH, THERE WAS THE POTENTIAL TO MISS ISSUES THAT A FRAMEWORK MIGHT HAVE SURFACED. I WOULD ADD THAT, FROM MY PERSPECTIVE, ADOPTING ONE OF THESE APPROACHES MIGHT HAVE LED TO A CONSIDERATION OF MORE QUALITATIVE INDICATORS. MY SENSE IS THAT THERE WAS A DEGREE OF IDENTIFYING IMPORTANT ISSUES AND THEN SEEKING TO FIND A TRACTABLE AND MEASURABLE QUANTITY LINKED TO EACH, RATHER THAN A CONSIDERATION OF WHAT THE QUALITIES OF SPECIFIC INDICATORS OR DATA SOURCES MIGHT BE.

THERE MAY BE SOME WAYS THAT THIS KIND OF CRITIQUE COULD BE ADDRESSED IN THE PAPER - I DON'T SEE THE COMMENTS ABOVE IN THIS SECTION AS NECESSARY REVISIONS BUT AS ISSUES THE AUTHORS MAY WANT TO CONSIDER.

 

Thank you very much for your comments and for suggesting that we look at this discussion on research assessment (see the previous response for how we approached this). One thing to note is that in the paper we are not concerned with assessing research, nor with directly assessing researchers. Our task is rather modest, that is, to define the success of a digital platform. Clearly, for us success will mean mainly one thing: getting a sufficient number of users to sign up and/or use GoTriple so that a thriving community can be established; this is the essence of everything for us. Within this umbrella we are then asking ourselves questions such as: if we have this thriving user community, what is its composition? Is it equal? Is it inclusive? Are people doing things together? Are the features (like the multilingualism) supporting this equality or inclusion? Are we fostering open science? If we are not satisfied with, for example, the equality or the diversity, then we will strive to take action. In practical terms our effort is in fact more similar to what happens in Open Source projects than to, for example, evaluating science, and that is why we have also touched upon that in the literature section.

Assessing research is a different level with different problems. After having evaluated your suggestions, we have included this material in the literature review (see lines 235-270). There are useful insights and parallels to learn from it, and this is now acknowledged.

The contribution on values is interesting and so is the one on using a framework. There is, however, the problem of which framework one should select. After having read the HuMetricsHSS whitepaper, we do not think that what is proposed there is the right approach for us: we are not a university and we are not evaluating researchers.

 

FINALLY, WHAT I DID APPRECIATE WAS THE FOCUS ON HOW THE INDICATORS WOULD BE USED. I FELT THIS SECTION COULD BE STRENGTHENED, AND AGAIN, BRINGING THE TABLE INTO VIEW IN THE MAIN PAPER, PERHAPS EXPANDED SOMEWHAT TO WORK AS A SUMMARY OF THE IMPLEMENTATION AND USE OF THE INDICATORS, WOULD BE VALUABLE.

Thank you for this comment. We are working on their implementation as we speak; because we have only just started our user engagement campaign, we do not yet have concrete data for the indicators. As soon as the community starts to pick up in terms of numbers, we will be able to start using the indicators and assessing them. Our plan is for a follow-up paper with an initial empirical assessment of the merits of our indicators. We added some details on this forthcoming work in the discussion/conclusion section.

 

Reviewer 2 Report

The authors of this paper presented the process of designing an Open Science discovery platform called GoTriple. The topic of this paper is interesting. However, it did not present a study in the manner of a scientific research paper. In the introduction section, the authors included too much background information about open science and did not discuss the key research question they would like to answer in their research. The paper did not include a literature review section about present studies related to open science and open science platforms. In the Materials and Methods section, the authors did not apply a proper research method to explain the whole design of this study. The Results section is more like a record of the meetings during the development of the GoTriple platform. All these points lead to a very weak discussion and conclusion section, which limits the theoretical and practical implications that can be seen in this paper. The authors need to rethink the key research aim and question for this paper and carry out a proper research design with scientific research methods.

Author Response

ANSWER TO R2

Thank you for the comments; we provide below details of how we have addressed them. The reviewer comments are in uppercase.

 

THE TOPIC OF THIS PAPER IS INTERESTING. HOWEVER, IT DID NOT PRESENT A STUDY IN THE MANNER OF A SCIENTIFIC RESEARCH PAPER. […] IN THE MATERIALS AND METHODS SECTION, THE AUTHORS DID NOT APPLY A PROPER RESEARCH METHOD TO EXPLAIN THE WHOLE DESIGN OF THIS STUDY.

Thank you for this comment. As we discuss in the text, the paper is the outcome of the process of making the platform and as such did not follow a rigid or traditional research design (i.e. hypothesis => experiment => validation). The process we adopted is "practice-led": people do and make things, and in making them they seek to learn new knowledge. This approach is described in detail in the methods section and the rationale for it is explained; in the revised version we have included additional clarifications on the scientific merits of the approach and its differences from other forms of research design (see lines 312-318). The practice-led approach is a rather emergent one and is closer to art and design research. We have now clarified in the manuscript how the practice-led approach differs from the traditional way of designing research.

 

IN THE INTRODUCTION SECTION, THE AUTHORS INCLUDED TOO MUCH BACKGROUND INFORMATION ABOUT OPEN SCIENCE AND DID NOT DISCUSS THE KEY RESEARCH QUESTION THEY WOULD LIKE TO ANSWER IN THEIR RESEARCH. […] THE PAPER DID NOT INCLUDE A LITERATURE REVIEW SECTION ABOUT PRESENT STUDIES RELATED TO OPEN SCIENCE AND OPEN SCIENCE PLATFORMS.

Thank you for this comment, and especially for pointing out we have not been sufficiently explicit about the research question. This is very useful.

There is a literature review in the paper, but we structured the literature within the Introduction section because this is how the publisher (MDPI) suggests organising a paper. Following also a comment from Reviewer 3, we have now separated the Introduction from the Literature (they are now two separate sections), and this makes the narrative and structure much clearer. Note also that, following a comment from R3, we have shortened the paper throughout, including the literature section. We also added new literature items following R1's comments. The Literature section starts at line 134 and ends at line 302.

Moreover, in the introduction we have formulated more clearly and explicitly the research problem we are trying to address (please see lines 62-66). The problem as now formulated ties much better with how we discuss the results, and the comment helped us improve the readability of the work significantly.

We hope that these changes address your comment.

 

ALL THESE POINTS LEAD TO A VERY WEAK DISCUSSION AND CONCLUSION SECTION, WHICH LIMITS THE THEORETICAL AND PRACTICAL IMPLICATIONS THAT CAN BE SEEN IN THIS PAPER. THE AUTHORS NEED TO RETHINK THE KEY RESEARCH AIM AND QUESTION FOR THIS PAPER AND CARRY OUT A PROPER RESEARCH DESIGN WITH SCIENTIFIC RESEARCH METHODS.

In our understanding, this comment relates to how we interpret the meaning of "scientific research methods". As stated in the paper, our work is practice-led, inductive and emergent. There is nothing we can do to transform this work into "hypothesis testing" research, as it is the outcome of designing a platform pretty much from scratch. In platform design one could adopt a positivist approach, but also an inductive/interpretative approach. We adopted the latter (based on user research), and much of the work is based on codesign and on trying things out to see how they work. We hope that the reviewer can accept that the ontological and epistemological assumptions of our manuscript are emergent and practical in nature.

As for the practical implications, to address this comment we have now strengthened the presentation of the lessons learned and improved the discussion and conclusion.

 

Reviewer 3 Report

The paper is interesting and should be published rapidly because of the launch of the GoTriple project. However, some aspects should be improved before publishing.

The paper is rich and long - perhaps too long for an article; it reads more like a short project report. I would suggest shortening it a little, but this should be the editor's decision.

1. Introduction: The first section must be divided into two sections: a genuine introduction which states clearly the objective of the paper, makes a distinction between the paper's and the project's objectives and mentions the general challenge of the topic; and a state of the art section with a review of former studies, projects and so on. This state of the art should be structured because the authors review different topics.

2. Materials and Methods: Some parts, especially at the beginning, should move to the state of the art section.

3. Results: very rich but a little bit lengthy; could be shortened. Perhaps finish the section with a short synthesis of all results (table, list...).

4. Discussion and Conclusion: I would suggest separating the two sections. First, discuss the results; then, in the Conclusion, describe the "lessons learned" and finish the paper with perspectives for further development (project) and research.

Most of this is about structuring and shortening (minor revision), not about adding new content. This should not take much time. Again, it is a good paper.    

Author Response

ANSWER TO R3

Thank you for the comments; we provide below details of how we have addressed them. The reviewer comments are in uppercase.

1. INTRODUCTION: THE FIRST SECTION MUST BE DIVIDED INTO TWO SECTIONS: A GENUINE INTRODUCTION WHICH STATES CLEARLY THE OBJECTIVE OF THE PAPER, MAKES A DISTINCTION BETWEEN THE PAPER'S AND THE PROJECT'S OBJECTIVES AND MENTIONS THE GENERAL CHALLENGE OF THE TOPIC; AND A STATE OF THE ART SECTION WITH A REVIEW OF FORMER STUDIES, PROJECTS AND SO ON. THIS STATE OF THE ART SHOULD BE STRUCTURED BECAUSE THE AUTHORS REVIEW DIFFERENT TOPICS.

In the original submission we kept the introduction and literature together since this is how the publisher (MDPI) suggests structuring a paper in their guidelines. We fully agree with you that it is beneficial to divide the material into two sections (as long as the publisher is fine with this). We have now separated the introduction from the literature in the revised paper.

2. MATERIALS AND METHODS: SOME PARTS, ESPECIALLY AT THE BEGINNING, SHOULD MOVE TO THE STATE OF THE ART SECTION.

We have moved the opening paragraph of the Methods to the end of the literature review and also shortened the text of this section.

3. RESULTS: VERY RICH BUT A LITTLE BIT LENGTHY; COULD BE SHORTENED. PERHAPS FINISH THE SECTION WITH A SHORT SYNTHESIS OF ALL RESULTS (TABLE, LIST...).

We have shortened the paper throughout (not just the results), with a focus on making it leaner and more readable. We have decided not to add additional tables at the end of each section as this would increase the length of the paper even more. Note, however, that we also had to add some material, in particular in the literature review and in the discussion, to respond to Reviewer 1's comments. In the end we made several cuts to the manuscript, and the paper is certainly much shorter than in the first submission, but the new additions answering Reviewer 1 have limited how much we could shorten it in response to this comment.

4. DISCUSSION AND CONCLUSION: I WOULD SUGGEST SEPARATING THE TWO SECTIONS. FIRST, DISCUSS THE RESULTS; THEN, IN THE CONCLUSION, DESCRIBE THE "LESSONS LEARNED" AND FINISH THE PAPER WITH PERSPECTIVES FOR FURTHER DEVELOPMENT (PROJECT) AND RESEARCH.

Thank you for this comment. We originally kept the discussion and conclusion together, again because this is how the publisher suggests structuring a paper in their guidelines. We have now divided them into two sections, and we agree this makes the final part of the manuscript better.

Round 2

Reviewer 2 Report

The authors of this paper presented the process of designing an Open Science discovery platform called GoTriple. In the introduction, the authors need to explain the difference between the created portal (GoTriple) and existing open science platforms. In the state-of-the-art section, the authors did not provide a review of existing open science platforms. What are the problems with existing platforms? Why did the authors have to build a completely new one? Are there any designs from other platforms they borrowed? Many published evaluation frameworks focusing on open science and open data already include elements/indicators for quality, usability, diversity, etc. Why did the authors choose to ignore all these related studies and just brainstorm compass indicators? I am still not persuaded by the authors about how they developed the indicators for an open science platform, especially since they did not apply a proper scientific research method to derive the indicators, they only included one sample platform in their study, and the indicators were not tested on other platforms for effectiveness and reliability. In addition, many references used in the paper are relatively old and are not scientific research papers.

Author Response

Please see attachment

Author Response File: Author Response.pdf
