Rigor and reproducibility for data analysis and design in the behavioral sciences

Tom Hildebrandt, Jason M. Prenoveau

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

The rigor and reproducibility of scientific methods depend heavily on the appropriate use of statistical methods to answer research questions and to draw meaningful and accurate inferences from data. The increasing analytic complexity and the growing valuation of novel statistical and methodological approaches place greater emphasis on statistical review. We outline the controversies within the statistical sciences that threaten the rigor and reproducibility of research published in the behavioral sciences and discuss ongoing approaches to generating reliable and valid inferences from data. We outline nine major areas to consider when evaluating the rigor and reproducibility of published articles and apply this framework to the 116 Behaviour Research and Therapy (BRAT) articles published in 2018. The results of our analysis highlight a pattern of missing rigor and reproducibility elements, especially pre-registration of study hypotheses, links to statistical code and output, and explicit archiving or sharing of the data used in analyses. We recommend that reviewers consider these elements in their peer review and that journals consider publishing the results of these rigor and reproducibility ratings alongside manuscripts to incentivize authors to include these elements in their work.

Original language: English
Article number: 103552
Journal: Behaviour Research and Therapy
Volume: 126
DOIs
State: Published - Mar 2020

Keywords

  • Big data
  • P-hacking
  • Reliability
  • Reproducibility
  • Statistics
