Review
Peer-Review Record

Ten Hot Topics around Scholarly Publishing

Publications 2019, 7(2), 34; https://doi.org/10.3390/publications7020034
by Jonathan P. Tennant 1,*, Harry Crane 2, Tom Crick 3, Jacinto Davila 4, Asura Enkhbayar 5, Johanna Havemann 6, Bianca Kramer 7, Ryan Martin 8, Paola Masuzzo 9, Andy Nobes 10, Curt Rice 11, Bárbara Rivera-López 12, Tony Ross-Hellauer 13, Susanne Sattler 14, Paul D. Thacker 15 and Marc Vanholsbeeck 16
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Submission received: 11 March 2019 / Revised: 23 April 2019 / Accepted: 8 May 2019 / Published: 13 May 2019
(This article belongs to the Special Issue New Frontiers for Openness in Scholarly Publishing)

Round 1

Reviewer 1 Report

Dear authors,

Thank you for the opportunity to read your manuscript. The paper gives a complete and critical overview of all the problems of open scholarly publishing. It is obvious that the authors are experts in the field, with a great deal of experience and knowledge about open access. The text is a systematic review paper that gives an insight into the “ten myths” of OA publishing. Although I think the paper could be published as it is, I have some minor suggestions:

Line 134 (about the benefits of preprints) – perhaps some earlier research on the OA citation advantage could be mentioned?

Line 156 – although most readers would know what altmetrics is, perhaps it could be explained in a few words?

Line 285 – the two implications seem very similar and might be considered as one implication.

Line 342 – it is a very interesting proposal to put authors in charge of their own peer review. In line 370 (“researchers are more than responsible and competent enough to ensure their own quality control”) – that would be the case in a perfect world, but what about ethical issues (conflicts of interest, etc.)?

Figure 4 – the table and the chart present the same results, the table would be enough.

Figure 5 – I understand that green arrows mean YES, and the red ones NO, but maybe it will not be clear to all, especially to those who print the text in black and white.

Best regards


Author Response

Dear authors,

Thank you for the opportunity to read your manuscript. The paper gives a complete and critical overview of all the problems of open scholarly publishing. It is obvious that the authors are experts in the field, with a great deal of experience and knowledge about open access. The text is a systematic review paper that gives an insight into the “ten myths” of OA publishing.

Thank you for the kind words and feedback! We should note that we have now changed the title of the paper, and the tone in many places, to move away from ‘myths’ and towards addressing ten core topics within scholarly communication. This was based on comments from the reviewers, as well as many comments we received on the preprint version of this article.

Although I think the paper could be published as it is, I have some minor suggestions:

Line 134 (about the benefits of preprints) – perhaps some earlier research on the OA citation advantage could be mentioned?

We have now added a citation for this.

Line 156 – although most readers would know what altmetrics is, perhaps it could be explained in a few words?

We have added a glossary now to make this easier.

Line 285 – the two implications seem very similar and might be considered as one implication.

We have edited this for clarity.

Line 342 – it is a very interesting proposal to put authors in charge of their own peer review. In line 370 (“researchers are more than responsible and competent enough to ensure their own quality control”) – that would be the case in a perfect world, but what about ethical issues (conflicts of interest, etc.)?

This is an excellent point. We have added in this potential caveat now.

Figure 4 – the table and the chart present the same results, the table would be enough.

Some readers might prefer one or the other, so we have kept both in for now.

Figure 5 – I understand that green arrows mean YES, and the red ones NO, but maybe it will not be clear to all, especially to those who print the text in black and white.

We have indicated now that the vertical arrows imply no and the horizontal arrows imply yes to make this easier for readers.


Reviewer 2 Report

Dear colleagues, I appreciate your work on this paper. I think it is helpful especially for new members of the field of scholarly communication to have this kind of background to have conversations with their colleagues surrounding these myths. My suggestions below offer sometimes very specific advice, sometimes questions about the framing and relation of topics to one another, and are all intended to help strengthen the manuscript. Overall I believe a review with this framing is helpful, and with some of the edits below, would be a good contribution to the literature. 

Figure 3 is very low quality and difficult to read. If important to the paper, recommend revising and including higher resolution figures. 

In the section on peer review, there is a tension brought up that is not explicitly addressed, particularly in lines 240-241. Here you write that the reviewers were desperate to stop publication of the studies, but their concerns were apparently not listened to by the publishers. Therefore, it is not necessarily the fault of the reviewers for not catching faulty science, but of publishers who want to publish highly funded studies that relate to big topics like American football. I believe this distinction needs to be addressed more clearly in this section. Yes, sometimes reviewers do not catch the faults; but how can we know (without open peer review, that is) whether they did catch them but their concerns were ignored?

The paragraph on ghostwritten papers does not seem to belong in this section on peer review. In a blind peer review, you do not know the author anyway. What is failing here is the conflict of interest not being exposed. I believe this may belong better in the next section on myth #4, in the section where authors are invested in the quality of the paper. This could be a counterargument to your arguments on that myth, however, so this move will take some finesse. 

I have a large concern about the discussion on Myth #4. Here you seem to argue that allowing for more "junk" to get out there fosters a culture of doubt and gives researchers a chance to practice information literacy and criticism. However, in the previous section, you have just argued that peer review fails to catch junk articles that then get out, and the public cannot distinguish between real science and bad science. These two sections are at odds with each other, and you do not address the issue of more "junk" being published lowering the public's opinion of science (even if scientists themselves may be doing better review). 

The discussion of Myth 6 on copyright transfer may benefit from a historical look (as your other sections have done). Copyright transfer may have at one point been necessary when journals were printed and sent around on an author's behalf. However, now that is hardly necessary. The discussion in this section about the pressure to sign and the lack of researcher royalties is good. 

In the Myth 7 section, you do some work to perpetuate the myth that APC is the most recognized model for publishing by not mentioning the models that the majority (?) of OA journals publish under. If APC publishing is such a minority, what is the actual majority? This would be very helpful information to include here (especially as this is a common failing of many articles discussing the problems with APCs, so this would set your article apart). 

I'm not sure what Myth 9 has to do, specifically, with open access publishing. The argument and discussion are accurate, but this is more a critique of all scholarly publishing, not just open access.  

I close with encouragement to consider these revisions as ways to improve the work. 

Author Response

Dear colleagues, I appreciate your work on this paper. I think it is helpful especially for new members of the field of scholarly communication to have this kind of background to have conversations with their colleagues surrounding these myths. My suggestions below offer sometimes very specific advice, sometimes questions about the framing and relation of topics to one another, and are all intended to help strengthen the manuscript. Overall I believe a review with this framing is helpful, and with some of the edits below, would be a good contribution to the literature.

Thank you to the referee for their kind words and constructive feedback; we appreciate it. We have incorporated the suggestions as far as we felt necessary to strengthen the paper, while acknowledging that this is still a very dynamic space for discussion. To this end, we have reframed the article as addressing topics rather than ‘dispelling myths’, to highlight that what we have written is by no means the final say on the matter.

Figure 3 is very low quality and difficult to read. If important to the paper, recommend revising and including higher resolution figures.

We have redrafted this figure now to be easier to read.

In the section on peer review, there is a tension brought up that is not explicitly addressed, particularly in lines 240-241. Here you write that the reviewers were desperate to stop publication of the studies, but their concerns were apparently not listened to by the publishers. Therefore, it is not necessarily the fault of the reviewers for not catching faulty science, but of publishers who want to publish highly funded studies that relate to big topics like American football. I believe this distinction needs to be addressed more clearly in this section. Yes, sometimes reviewers do not catch the faults; but how can we know (without open peer review, that is) whether they did catch them but their concerns were ignored?

Excellent point. We have added some extra text now to describe this tension better.

The paragraph on ghostwritten papers does not seem to belong in this section on peer review. In a blind peer review, you do not know the author anyway. What is failing here is the conflict of interest not being exposed. I believe this may belong better in the next section on myth #4, in the section where authors are invested in the quality of the paper. This could be a counterargument to your arguments on that myth, however, so this move will take some finesse.

It is an interesting problem, because again the real question is what the purpose of peer review is. If it is not catching ghost-writing and conflicts of interest, and is letting such potentially biased studies be passed off as rigorous scholarship, then that is a problem. We have edited this section to make this clearer now.

I have a large concern about the discussion on Myth #4. Here you seem to argue that allowing for more "junk" to get out there fosters a culture of doubt and gives researchers a chance to practice information literacy and criticism. However, in the previous section, you have just argued that peer review fails to catch junk articles that then get out, and the public cannot distinguish between real science and bad science. These two sections are at odds with each other, and you do not address the issue of more "junk" being published lowering the public's opinion of science (even if scientists themselves may be doing better review).

We understand the tension the reviewer describes, and it is a difficult one to resolve given the complexity of peer review and its intended purpose. We have now edited this section to describe the potential issues associated with the publication of ‘junk research’ for different audiences (i.e., experts versus non-experts).

The discussion of Myth 6 on copyright transfer may benefit from a historical look (as your other sections have done). Copyright transfer may have at one point been necessary when journals were printed and sent around on an author's behalf. However, now that is hardly necessary. The discussion in this section about the pressure to sign and the lack of researcher royalties is good.

Yes, this is a great point, thanks. We have now added this to the introductory paragraph for extra context.

In the Myth 7 section, you do some work to perpetuate the myth that APC is the most recognized model for publishing by not mentioning the models that the majority (?) of OA journals publish under. If APC publishing is such a minority, what is the actual majority? This would be very helpful information to include here (especially as this is a common failing of many articles discussing the problems with APCs, so this would set your article apart).

Excellent point. We have edited this now to be clearer. It is the majority of journals, but a relative minority of actual articles, that are APC-funded.

I'm not sure what Myth 9 has to do, specifically, with open access publishing. The argument and discussion are accurate, but this is more a critique of all scholarly publishing, not just open access.

Good point. We have changed the title again.

I close with encouragement to consider these revisions as ways to improve the work.

They have all been seriously considered and integrated! Thank you.

Reviewer 3 Report

Great article. It clearly explains and discusses some of the more widespread misunderstandings of scholarly communication practices in the digital era. The bibliography review is excellent.

These are my comments:


Introduction: The argument on the polemic aspects that Plan S and AmeliCA point out is understandable. However, they are not comparable, since Plan S is an ‘official’ initiative that is intended to be applicable/mandatory for a group of countries, while AmeliCA is a collective community-based initiative that is not necessarily conceived to change the current government-driven scholarly communication system.


Myth 1: This myth is applicable to some disciplines, but I wouldn't say it is the same for all, because the proliferation of preprint servers, and even the practice of posting preprints, is still not common in the Humanities or some Social Sciences. Also, this practice is common for some languages, especially English, but it is significantly less popular among other languages.


Myth 3 and myth 4 are very closely related. Myth 3 is very clear, although the examples provided to discuss the effectiveness of peer review are US-centered, which could be misinterpreted. However, myth 4 is a little confusing. On the one hand, when the authors propose "putting authors in charge of their own peer review", they are assuming that authors are experienced and highly skilled, which is not true in all cases; for example, authors from developing countries are sometimes still in the process of acquiring and improving research skills. In those cases I see this as hardly applicable. On the other hand, the authors argue that readers have or could develop a "dose of skepticism", which again could be true for readers from developed countries, where levels and practices of readership are considerably better than in developing countries.


Myth 5: please see this reference, which could help strengthen the argument on prejudices in predatory publishing (geopolitical and commercial context): https://librarypublishing.org/predatory-publishing-global-south-perspective/


Myth 6: the authors do not mention licensing through Creative Commons licenses at all, which is surprising, since it is widely known that this is an alternative to traditional copyright transfer. Many OA journals use CC licenses (Plan S even recommends using the most open of them), and part of the myth could be considered to be which type of license best fits OA purposes and/or whether CC licenses "substitute" national copyright laws. The terms of these licenses, and the implications of using each one in scientific papers, are still confusing for many editors. Although this is not the main point of this myth, I recommend introducing this discussion as part of the description of the issue.


Myth 8: besides Sherpa/Romeo, other similar databases should be mentioned, for instance Diadorim (Brazil), Dulcinea (Spain), Aura (Latin America).


In the conclusions section, the authors could return to the meaning of "myth" to explain why these issues are so strongly held in the scholarly communication environment and why they could be "misrepresenting the truth", as the Oxford dictionary defines a myth.


Author Response

Great article. It clearly explains and discusses some of the more widespread misunderstandings of scholarly communication practices in the digital era. The bibliography review is excellent. These are my comments:

Thank you to the reviewer for their kind comments. We have given careful consideration to each of the comments below, and integrated changes where needed.

Introduction: The argument on the polemic aspects that Plan S and AmeliCA point out is understandable. However, they are not comparable, since Plan S is an ‘official’ initiative that is intended to be applicable/mandatory for a group of countries, while AmeliCA is a collective community-based initiative that is not necessarily conceived to change the current government-driven scholarly communication system.

We have simply removed the reference to AmeliCA to address this.

Myth 1: This myth is applicable to some disciplines, but I wouldn't say it is the same for all, because the proliferation of preprint servers, and even the practice of posting preprints, is still not common in the Humanities or some Social Sciences. Also, this practice is common for some languages, especially English, but it is significantly less popular among other languages.

Good point. We have added this in now as a concluding sentence to this section.

Myth 3 and myth 4 are very closely related. Myth 3 is very clear, although the examples provided to discuss the effectiveness of peer review are US-centered, which could be misinterpreted. However, myth 4 is a little confusing. On the one hand, when the authors propose "putting authors in charge of their own peer review", they are assuming that authors are experienced and highly skilled, which is not true in all cases; for example, authors from developing countries are sometimes still in the process of acquiring and improving research skills. In those cases I see this as hardly applicable. On the other hand, the authors argue that readers have or could develop a "dose of skepticism", which again could be true for readers from developed countries, where levels and practices of readership are considerably better than in developing countries.

This is an excellent point, also closely related to the comments from another reviewer. We have now added text to indicate that these issues impact ‘experts’ and ‘non-experts’ in different ways.

Myth 5: please see this reference, which could help strengthen the argument on prejudices in predatory publishing (geopolitical and commercial context): https://librarypublishing.org/predatory-publishing-global-south-perspective/

Thank you for this. We have already included all of the references within that article in the present discussion, so we think it should be okay to leave it out for now.

Myth 6: the authors do not mention licensing through Creative Commons licenses at all, which is surprising, since it is widely known that this is an alternative to traditional copyright transfer. Many OA journals use CC licenses (Plan S even recommends using the most open of them), and part of the myth could be considered to be which type of license best fits OA purposes and/or whether CC licenses "substitute" national copyright laws. The terms of these licenses, and the implications of using each one in scientific papers, are still confusing for many editors. Although this is not the main point of this myth, I recommend introducing this discussion as part of the description of the issue.

We have added a small section about how CC licenses can be used to this effect.

Myth 8: besides Sherpa/Romeo, other similar databases should be mentioned, for instance Diadorim (Brazil), Dulcinea (Spain), Aura (Latin America).

We have added in a reference to these databases now.

In the conclusions section, the authors could return to the meaning of "myth" to explain why these issues are so strongly held in the scholarly communication environment and why they could be "misrepresenting the truth", as the Oxford dictionary defines a myth.

We have altered the title and tone of the article to indicate that we are ‘addressing topics’ rather than ‘dispelling myths’.

Round 2

Reviewer 2 Report

Thank you for seriously and thoroughly taking my considerations into account, and for your response letter. I think the reframing of "hot topics" is much better and changes the narrative in a beneficial way. Now, I think this will be helpful for many people who work in the publishing and scholarly communication world and may be a useful teaching or skilling-up tool for those looking to enhance their knowledge. Some of the additions that you have made, while seemingly small, go a long way to telling a fuller picture. I recommend only a basic copy-edit for the journal, but now the content is much better. 
