Catching errors in scientific publishing.
At Inscopix we value scientific integrity and transparency. Scientific research is expected to be reliable, trustworthy, and valuable to society, so that the expenditures supporting it can be justified. In recent years, however, scarce funding and the publish-or-perish culture have put these principles under threat from scientific misconduct. It is disconcerting that thousands of publications, some in highly respected journals, have been retracted for misconduct, data falsification, and plagiarism (4,449 retractions between 1928 and 2011 [1]). The community generally trusts that papers in peer-reviewed journals are accurate and honestly reported, and papers are the currency of science: they help researchers win grants, promotions, and tenure, and they communicate findings so that others may benefit from the work. But just like currency, papers can be falsified for personal gain.
Data irreproducibility in particular is a major obstacle to maintaining high standards in scientific publishing. At the moment, scientists are rarely incentivized to provide evidence of the quality of the data behind their publications, so honest errors and blatant falsification alike can be difficult to catch. Proper measures should be taken to establish the credibility of scientific research before others waste time trying to reproduce bad data, further eroding the already fragile public trust in science.
Journals like Nature have already taken steps to tackle this problem: to ensure that all information relevant to the reproducibility of an experiment, both technical and statistical, is available, Nature has introduced an 18-point checklist as part of its reporting guidelines. Further highlighting this trend toward transparency are two articles [2, 3] in a recent issue of Science (June 2015), which set out standards to make "transparency, openness, and reproducibility" readily recognizable, qualities that are otherwise easily overlooked in the peer review and publication process.
These standards were developed by the Transparency and Openness Promotion (TOP) Committee, comprising journal editors, representatives of funding agencies, and researchers, with the aim of reforming the current system of review and publication. The eight standards are modular, so they can be adopted either in whole or in part, and each has three levels of transparency, with stringency increasing from Level 1 to Level 3 (Chart 1). A separate Level 0 does not meet the standards and in some cases simply reflects current publication practice.
For example, under the "Research Materials Transparency" standard, Level 1 requires that articles state whether, and where, the materials needed to replicate the reported procedures are available. Level 2 requires journals to publish papers only if the materials used in the study are clearly listed, so that all published results can be precisely reproduced; data repositories like DRYAD serve this purpose and increase the accessibility and visibility of published data. Level 3 requires that an independent group verify the results from the information provided before the paper is accepted; a paper whose results cannot be replicated is considered unfit for publication.
Chart 1. List of some standards and levels of the TOP guidelines. Levels 1 to 3 are increasingly stringent for each standard. Level 0 offers a comparison that does not meet the standard (Adapted from Nosek et al., 2015).
Although Level 1 may seem to have minimal effect, it is still likely to improve transparency and reproducibility. The ultimate goal is to benefit the scientific community by raising the current standards for reporting and for access to research data. Already, more than 100 journals and 30 organizations have expressed support for these guidelines and principles, and have "committed to conducting a review within a year of the standards for potential adoption".
Another recent example of a concerted effort to reform scientific publishing is the Reproducibility Project: Cancer Biology, a collaboration between Science Exchange and the Center for Open Science that will address experimental reproducibility [4]. The group will independently replicate a subset of experimental results from 50 high-impact cancer biology studies published between 2010 and 2012. The project has garnered both praise and criticism, and only time will tell how effective it proves to be, with results expected by the end of 2017.
Transparency, openness, and reproducibility in scientific studies are at stake, and these guidelines can help restore them. The scientific community must recognize the importance of these standards and adopt open practices to avoid errors and retractions. We hope that this dialogue will help the scientific communication process evolve, and ultimately protect experimental integrity and public trust in science!
1. Grieneisen and Zhang (2012). A comprehensive survey of retracted articles from the scholarly literature. PLOS ONE 7(10): e44118.
2. Alberts et al. (2015). Scientific integrity: Self-correction in science at work. Science 348(6242): 1420-1422.
3. Nosek et al. (2015). Scientific standards: Promoting an open research culture. Science 348(6242): 1422-1425.
4. Kaiser (2015). The cancer test. Science 348(6242): 1411-1413.