Special Issue "Misconduct in Scientific Publishing"


A special issue of Publications (ISSN 2304-6775).

Deadline for manuscript submissions: closed (28 February 2014)

Special Issue Editor

Guest Editor
Prof. Dr. R. Grant Steen

President, MediCC!, LLC, Medical Communications Consultants, Chapel Hill, North Carolina 27517, USA
Interests: scientific retraction; research misconduct; medical misinformation; retraction as a proxy for misconduct; neuroepistemology; cognitive biases associated with misinformation

Special Issue Information

Dear Colleagues,

Scientists believe—or at least profess to believe—that science is a process of iterative approach to objective truth. Failed experiments are supposed to serve as fodder for successful experiments, so that clouded thinking can be clarified. Observations that are fundamentally true should find support, while flawed observations are supplanted by better ones. Why, then, would anyone think that scientific fraud can succeed?

Recently, there has been an alarming increase in the number of papers in the refereed scientific literature that have been retracted for fraud (e.g., data fabrication or data falsification).  Do fraudulent authors imagine that their fraud will not be exposed?  Do they see the benefits of fraud as so attractive that they are willing to risk exposure?  Or do some scientists doubt the process itself, believing themselves to be immune to the failure to replicate?

It may be true that most scientists who fabricate or falsify data believe that they know the “right” answer in advance of the data and that they will soon have the data necessary to support their favored answer.  It may therefore seem legitimate to fabricate; such scientists may believe that they are simply saving time by cutting corners.  They may even believe that they are serving science and the greater good by pushing a bold “truth” into print.  But humans are so prone to bias that the process of scientific discovery has been developed specifically to insulate scientists from the malign effects of wishful thinking.  Measurement validation, hypothesis testing, random allocation, blinding of outcome assessment, replication of results, referee and peer review, and open sharing of trade secrets are keys to establishing the truth of a scientific idea.  When those processes are subverted, scientific results become prone to retraction.

This Special Issue—Misconduct in Scientific Publishing—will explore the surge in scientific retractions.  Are retractions a valid proxy for research misconduct?  Does the increase in retractions mean that there has been an increase in misconduct?  How can we measure misconduct objectively?  Are surveys that characterize scientific behavior valid or do they misrepresent the prevalence of misconduct?

I look forward to your contributions and your insight on this important topic.

Prof. Dr. R. Grant Steen,
Guest Editor

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form to submit your manuscript. Manuscripts can be submitted until the deadline. Authors are encouraged to keep the body of the paper (excluding the Abstract, References, Tables, and Figures) to 3,000 words or fewer. References should number between 20 and 50, with an emphasis on recent literature published within the past 5 years. Papers will be published continuously (as soon as they are accepted) and will be listed together on the special issue website. Research articles, review papers, brief editorials, and short communications are invited.

Submitted manuscripts should not have been published previously nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors is available on the Instructions for Authors page, together with other relevant information for manuscript submission. Publications is an international peer-reviewed Open Access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. For the first couple of issues, the Article Processing Charge (APC) will be waived for well-prepared manuscripts. An English correction and/or formatting fee of 250 CHF (Swiss Francs) will be charged for accepted articles that require extensive additional formatting and/or English correction.

Keywords

  • research ethics
  • scientific retraction
  • neuroepistemology
  • research misconduct
  • fabrication
  • falsification
  • data plagiarism

Published Papers (8)


Research


Open Access Article: Failure to Replicate: A Sign of Scientific Misconduct?
Publications 2014, 2(3), 71-82; doi:10.3390/publications2030071
Received: 28 February 2014 / Revised: 12 August 2014 / Accepted: 18 August 2014 / Published: 1 September 2014
Abstract
Repeated failures to replicate reported experimental results could indicate scientific misconduct or simply result from unintended error. Experiments performed by one individual involving tritiated thymidine, published in two papers in Radiation Research, showed exponential killing of V79 Chinese hamster cells. Two other members of the same laboratory were unable to replicate the published results in 15 subsequent attempts to do so, finding, instead, at least 100-fold less killing and biphasic survival curves. These replication failures (which could have been anticipated based on earlier radiobiological literature) raise questions regarding the reliability of the two reports. Two unusual numerical patterns appear in the questioned individual’s data, but do not appear in control data sets from the two other laboratory members, even though the two key protocols followed by all three were identical or nearly so. This report emphasizes the importance of: (1) access to raw data that form the background of reports and grant applications; (2) knowledge of the literature in the field; and (3) the application of statistical methods to detect anomalous numerical behaviors in raw data. Furthermore, journals and granting agencies should require that authors report failures to reproduce their published results.
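The abstract above refers to statistical screening of raw data for anomalous numerical patterns, but does not specify which tests were used. As a purely illustrative sketch (in Python, with a hypothetical function name and made-up data, not the authors' analysis), one common screen is a chi-square test of whether the terminal digits of recorded values are approximately uniform:

    # Illustrative sketch only; not the analysis from the paper above.
    # A terminal-digit test asks whether the last digits of recorded raw values
    # are roughly uniform, as expected for many measured quantities; strong
    # deviations can flag data worth closer scrutiny (not proof of fraud).
    from collections import Counter
    from scipy.stats import chisquare

    def terminal_digit_test(values):
        """Chi-square test of uniformity for the final digit of each recorded value."""
        last_digits = [int(str(abs(int(v)))[-1]) for v in values]
        counts = Counter(last_digits)
        observed = [counts.get(d, 0) for d in range(10)]
        expected = [len(last_digits) / 10.0] * 10
        return chisquare(observed, f_exp=expected)

    # Hypothetical usage with made-up colony counts (a real screen needs many values):
    # stat, p = terminal_digit_test([1342, 1287, 1510, 1498, 1275, ...])

Such a screen only flags anomalies; as the article notes, access to the raw data and knowledge of the prior literature are still needed to interpret them.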
Open Access Article: Measuring Scientific Misconduct—Lessons from Criminology
Publications 2014, 2(3), 61-70; doi:10.3390/publications2030061
Received: 28 February 2014 / Revised: 26 June 2014 / Accepted: 27 June 2014 / Published: 3 July 2014
Abstract
This article draws on research traditions and insights from Criminology to elaborate on the problems associated with current practices of measuring scientific misconduct. Analyses of the number of retracted articles are shown to suffer from the fact that the distinct processes of misconduct, detection, punishment, and publication of a retraction notice all contribute to the number of retractions and, hence, will result in biased estimates. Self-report measures, as well as analyses of retractions, are additionally affected by the absence of a consistent definition of misconduct. This problem of definition is addressed further as stemming from a lack of generally valid definitions both on the level of measuring misconduct and on the level of scientific practice itself. Because science is an innovative and ever-changing endeavor, the meaning of misbehavior is permanently shifting and frequently readdressed and renegotiated within the scientific community. Quantitative approaches (i.e., statistics) alone are thus hardly able to portray this dynamic phenomenon accurately. It is argued that more research on the different processes and definitions associated with misconduct and its detection and sanctions is needed. The existing quantitative approaches need to be supported by qualitative research better suited to address and uncover processes of negotiation and definition.
Open Access Article: The Demographics of Deception: What Motivates Authors Who Engage in Misconduct?
Publications 2014, 2(2), 44-50; doi:10.3390/publications2020044
Received: 9 February 2014 / Revised: 20 March 2014 / Accepted: 21 March 2014 / Published: 28 March 2014
Abstract
We hypothesized that scientific misconduct (data fabrication or falsification) is goal-directed behavior. This hypothesis predicts that papers retracted for misconduct: are targeted to journals with a high impact factor (IF); are written by authors with additional papers withdrawn for misconduct; diffuse responsibility across many (perhaps innocent) co-authors; and are retracted more slowly than papers retracted for other infractions. These hypotheses were initially tested and confirmed in a database of 788 papers; here we reevaluate these hypotheses in a larger database of 2,047 English-language papers. Journal IF was higher for papers retracted for misconduct (p < 0.0001). Roughly 57% of papers retracted for misconduct were written by a first author with other retracted papers; 21% of erroneous papers were written by authors with >1 retraction (p < 0.0001). Papers flawed by misconduct diffuse responsibility across more authors (p < 0.0001) and are withdrawn more slowly (p < 0.0001) than papers retracted for other reasons. Papers retracted for unknown reasons are unlike papers retracted for misconduct: they are generally published in journals with low IF, are written by authors with no other retractions, have fewer authors listed, and are retracted quickly. Papers retracted for unknown reasons appear not to represent a deliberate effort to deceive.
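The central comparison reported above (roughly 57% versus 21% of first authors with more than one retraction) is a two-proportion contrast that can be checked with a standard 2x2 test. The sketch below is illustrative only: the abstract does not give the underlying counts, so the table is hypothetical and merely echoes the reported proportions.

    # Illustrative sketch only; the counts below are hypothetical, chosen to echo
    # the proportions reported in the abstract (~57% vs. ~21%), not the real data.
    from scipy.stats import fisher_exact

    #        first author has >1 retraction, first author has exactly 1 retraction
    table = [[570, 430],   # papers retracted for misconduct (hypothetical counts)
             [210, 790]]   # papers retracted for error (hypothetical counts)

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")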
Open Access Article: A Case-Control Comparison of Retracted and Non-Retracted Clinical Trials: Can Retraction Be Predicted?
Publications 2014, 2(1), 27-37; doi:10.3390/publications2010027
Received: 7 September 2013 / Revised: 9 October 2013 / Accepted: 10 October 2013 / Published: 27 January 2014
Abstract
Does scientific misconduct severe enough to result in retraction disclose itself with warning signs? We test a hypothesis that variables in the results section of randomized clinical trials (RCTs) are associated with retraction, even without access to raw data. We evaluated all English-language RCTs retracted from the PubMed database prior to 2011. Two controls were selected for each case, matching publication journal, volume, issue, and page as closely as possible. Number of authors, subjects enrolled, patients at risk, and patients treated were tallied in cases and controls. Among case RCTs, 17.5% had ≤2 authors, while 6.3% of control RCTs had ≤2 authors. Logistic regression shows that having few authors is associated with retraction (p < 0.03), although the number of subjects enrolled, patients at risk, or treated patients is not. However, none of the variables singly, nor all of the variables combined, can reliably predict retraction, perhaps because retraction is such a rare event. Exploratory analysis suggests that retraction rate varies by medical field (p < 0.001). Although retraction cannot be predicted on the basis of the variables evaluated, concern is warranted when there are few authors, enrolled subjects, patients at risk, or treated patients. Ironically, these features urge caution in evaluating any RCT, since they identify studies that are statistically weaker.
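The abstract above reports a logistic regression in which having few authors is associated with retraction. A minimal sketch of that kind of model is shown below; it is not the authors' code, the author counts and outcomes are invented purely for illustration, and the statsmodels library is assumed.

    # Minimal illustrative sketch of a case-control logistic regression.
    # The data are invented, and the model is far simpler than a real analysis
    # (no matching variables, no adjustment for medical field).
    import numpy as np
    import statsmodels.api as sm

    n_authors = np.array([2, 1, 4, 2, 6, 3, 5, 7, 2, 8, 6, 5], dtype=float)
    retracted = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # 1 = retracted case, 0 = matched control

    X = sm.add_constant(n_authors)            # intercept + author count
    model = sm.Logit(retracted, X).fit(disp=0)
    print(model.params)                        # a negative slope on author count means that
                                               # fewer authors -> higher odds of retraction

In a real matched case-control analysis, conditional logistic regression on the matched sets would be more appropriate; the sketch only illustrates the direction of the reported association.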
Open Access Article: A Novel Rubric for Rating the Quality of Retraction Notices
Publications 2014, 2(1), 14-26; doi:10.3390/publications2010014
Received: 21 November 2013 / Revised: 15 January 2014 / Accepted: 20 January 2014 / Published: 24 January 2014
Abstract
When a scientific article is found to be either fraudulent or erroneous, one course of action available to both the authors and the publisher is to retract said article. Unfortunately, not all retraction notices properly inform the reader of the problems with a retracted article. This study developed a novel rubric for rating and standardizing the quality of retraction notices, and used it to assess the retraction notices of 171 retracted articles from 15 journals. Results suggest the rubric to be a robust, if preliminary, tool. Analysis of the retraction notices suggests that their quality has not improved over the last 50 years, that it varies both between and within journals, and that it is dependent on the field of science, the author of the retraction notice, and the reason for retraction. These results indicate a lack of uniformity in the retraction policies of individual journals and throughout the scientific literature. The rubric presented in this study could be adopted by journals to help standardize the writing of retraction notices.
Open Access Article: Combating Fraud in Medical Research: Research Validation Standards Utilized by the Journal of Surgical Radiology
Publications 2013, 1(3), 140-145; doi:10.3390/publications1030140
Received: 24 October 2013 / Revised: 5 November 2013 / Accepted: 6 November 2013 / Published: 15 November 2013
Abstract
Fraud in medical publishing has risen to the national spotlight as manufactured and suspect data have led to retractions of papers in prominent journals. Moral turpitude in medical research has led to the loss of National Institutes of Health (NIH) grants, has directly affected patient care, and has had severe legal ramifications for some authors. While there are multiple checks and balances in medical research to prevent fraud, the final enforcement lies with journal editors and publishers, who have an ethical and legal obligation to examine carefully and critically the medical research published in their journals. Failure to follow the highest standards in medical publishing can lead to legal liability and destroy a journal’s integrity. More significant, however, is preserving the medical profession’s trust with colleagues and with the public it serves. This article discusses various techniques and tools available to editors and publishers that can help curtail fraud in medical publishing.

Review


Open Access Review: Editorial Misconduct—Definition, Cases, and Causes
Publications 2014, 2(2), 51-60; doi:10.3390/publications2020051
Received: 17 December 2013 / Revised: 25 February 2014 / Accepted: 28 March 2014 / Published: 4 April 2014
Abstract
Though scientific misconduct perpetrated by authors has received much press, little attention has been given to the role of journal editors. This article discusses cases and types of “editorial misconduct”, in which the action or inaction of editorial agents ended in publication of fraudulent work and/or poor or failed retractions of such works, all of which ultimately harm scientific integrity and the integrity of the journals involved. Rare but existent, editorial misconduct ranges in severity and includes deliberate omission or ignorance of peer review, insufficient guidelines for authors, weak or disingenuous retraction notices, and refusal to retract. The factors responsible for editorial misconduct and the options to address these are discussed.
Open Access Review: Research Misconduct—Definitions, Manifestations and Extent
Publications 2013, 1(3), 87-98; doi:10.3390/publications1030087
Received: 26 August 2013 / Revised: 26 September 2013 / Accepted: 30 September 2013 / Published: 11 October 2013
Abstract
In recent years, the international scientific community has been rocked by a number of serious cases of research misconduct. In one of these, Woo Suk Hwang, a Korean stem cell researcher, published two articles on research with ground-breaking results in Science in 2004 and 2005. Both articles were later revealed to be fakes. This paper provides an overview of what research misconduct is generally understood to be, its manifestations and the extent to which they are thought to exist.

Journal Contact

MDPI AG
Publications Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
publications@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18