New APS replication initiative aims to open the file drawer, heralding a positive step for psychological science.

During the past couple of years, psychological science has been in the midst of a PR disaster. Academics have publicly announced that they failed to replicate some of the most classic findings in our field, bringing the original effects themselves — and often the integrity of the original researchers reporting them — into question. These pronouncements and the subsequent push to estimate the true effect sizes of various findings led to the even more disturbing realization that it is far too difficult to publish these failed replications — or successful replications, for that matter — in the peer-reviewed academic journals that serve as our bread and butter.

A new initiative, backed by the Association for Psychological Science and co-headed by Dr. Daniel Simons of the University of Illinois at Urbana-Champaign and Dr. Alex Holcombe of the University of Sydney, is ripping the dirty laundry that’s been airing in public for the past two years off of the clothesline and finally giving it the good, thorough cleaning it has so desperately needed. This initiative aims to make rigorous replication a rewarding and beneficial part of a productive scientific career by establishing a special section dedicated to publishing replications in one of the top journals in our field, Perspectives on Psychological Science (an official APS journal).

Psychologists have been calling for a widespread replication effort for years. However, there are several good reasons why this initiative is the first with a genuine chance of revitalizing our field.

  1. It helps that the initiative is strongly backed by our field’s premier organization. Gaining the support of APS is a critical win for the replication initiative, and it will make researchers far more willing to take part in the project. A publication in Perspectives is certainly nothing to scoff at, and given that researchers who previously performed replication attempts could only hope to publish their findings on a blog or personal website, offering a publication in Perspectives is more than a step forward — it’s a giant leap.
  2. Studies will be peer-reviewed before data are collected. This is a critical switch from the typical peer-review process, in which reviewers only read a manuscript once the data have been collected, analyzed, and presented in a tidy package. It addresses several important concerns that have been raised recently about what exactly it means for an effect to successfully “replicate.” As Gregory Francis pointed out in a recent Perspectives article of his own, the effect sizes that we study are often so small that one must expect a certain number of failed replications to arise within any given set of studies (a brief simulation after this list illustrates why). Yet, all too often, the “six-study packages” presented in journals include absolutely no null effects. Although Francis notes that this could potentially arise from running experiments or analyzing data improperly, the lack of published null effects is more likely evidence of a “file drawer problem”: null or negative results are suppressed and hidden, never to see the light of publication. Journals are incredibly hesitant to publish null results or failures to replicate, opting instead to publish novel, significant, and (ideally) flashy results. Because the research design is approved a priori, rather than the results being approved after the data have been collected, replications will now be guaranteed their spot in the publication record — for public consumption and archival in the scientific literature — even if they fail.
  3. The Registered Replication Reports will hopefully serve as a comprehensive repository where researchers can find rigorous replications of findings of interest and quickly identify the true effect sizes of the phenomena they wish to study. By accumulating a series of replications across multiple labs, the initiative will provide a stable, robust indicator of the “true” effect size of a finding, rather than forcing researchers to build research programs and design new studies around an effect size estimated from a single, often underpowered study.
  4. The initiative is not Pass/Fail. Recent attempts to replicate findings have resulted in a dichotomous judgment of “Replicated” or “Failed to Replicate,” which can trigger raw emotions and an (often justified) defensiveness on the part of original researchers whose findings have been deemed “non-replicable.” This project will focus on estimating cumulative effect sizes across multiple labs, rather than attempting to determine whether every single finding should be awarded a “Pass” or a “Fail” on the basis of a single replication attempt (a sketch of this kind of multi-lab pooling appears after this list). Not only does the focus on cumulative effect sizes address many of the shortcomings of traditional null-hypothesis significance testing, it also has the potential to attenuate a lot of the animosity that has plagued recent “clean-up” efforts within the field. It is much more reasonable to expect a researcher to respond in a favorable, productive manner to a multi-lab analysis indicating that a finding’s effect size may be smaller than originally estimated than to a single study that subtly (or not so subtly) implies the original finding is somehow fake.
  5. It is not a witch hunt. As mentioned in the previous point, many replication efforts have focused on targeting certain findings and seeking to determine whether they are “real” or “not real.” In contrast, the focus of the Registered Replication Reports is not to dismantle existing findings. Rather, the reports are intended to provide an objective indicator of true effect sizes. This emphasis should foster a much more civilized discussion within the field by removing the sense that certain labs or findings are being “targeted.” Ideally, establishing a true, robust effect size for a wide variety of findings (both controversial and not) will make it harder for certain lines of research to be discredited, and easier for future researchers to build productive research programs on the existing literature.
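To make Francis’s point about small effects concrete, here is a minimal simulation of my own (not from his article): it assumes a modest true effect of d = 0.3 and 30 participants per group — both numbers chosen purely for illustration — and counts how often a faithful replication comes out non-significant.

```python
# Minimal simulation (illustrative assumptions: d = 0.3, n = 30 per group):
# with a small true effect and typical sample sizes, many exact replications
# are expected to be non-significant even though the effect is real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.3          # assumed small true effect size (Cohen's d)
n_per_group = 30      # assumed sample size per group
n_replications = 10_000

p_values = []
for _ in range(n_replications):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    p_values.append(stats.ttest_ind(treatment, control).pvalue)

failure_rate = np.mean(np.array(p_values) >= 0.05)
print(f"Share of faithful replications that 'fail' (p >= .05): {failure_rate:.2f}")
# Under these assumptions roughly 80% of faithful replications come out
# non-significant -- which is why an all-significant six-study package
# should raise eyebrows.
```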
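And as a rough sketch of how the multi-lab pooling in points 3 and 4 might work, the snippet below combines hypothetical per-lab effect sizes with a standard fixed-effect (inverse-variance-weighted) meta-analytic estimator. The lab results and sample sizes are invented for illustration; the actual analyses used in the Registered Replication Reports may differ.

```python
# Fixed-effect pooling of per-lab effect sizes (hypothetical numbers):
# each lab's Cohen's d is weighted by the inverse of its sampling variance.
import numpy as np

def cohens_d_variance(d, n1, n2):
    """Approximate sampling variance of Cohen's d for two independent groups."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Hypothetical per-lab results: (observed d, n per group)
lab_results = [(0.45, 40), (0.12, 55), (0.31, 35), (0.05, 60), (0.28, 50)]

ds = np.array([d for d, _ in lab_results])
variances = np.array([cohens_d_variance(d, n, n) for d, n in lab_results])
weights = 1.0 / variances

pooled_d = np.sum(weights * ds) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"Pooled effect size: d = {pooled_d:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
# Any single lab's estimate bounces around; the pooled estimate, with its
# narrower confidence interval, is the stable quantity a multi-lab report
# can put on the record -- no Pass/Fail verdict required.
```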

For more information on this new APS initiative, see the official press release here. All of the Registered Replication Reports will be open access and free to all interested viewers without a subscription to Perspectives.



Francis, G. (2012). The Psychology of Replication and Replication in Psychology. Perspectives on Psychological Science, 7(6), 585-594. DOI: 10.1177/1745691612459520

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641.

Pashler, H., & Wagenmakers, E. (2012). Editors’ Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence? Perspectives on Psychological Science, 7(6), 528-530. DOI: 10.1177/1745691612465253

Wicherts, J., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726-728. DOI: 10.1037/0003-066X.61.7.726
