This tutorial was presented at the University of Melbourne, September 29, 2020.
The repository can be found at: https://github.com/dcosme/specification-curves/
There are many different ways to test a given association, yet we typically report only one or a few model specifications.
Model selection relies on choices by the researcher, and these choices are often arbitrary and sometimes driven by a desire for significant results.
Given the same dataset, two researchers might choose to answer the same question in very different ways.
Figure from Simonsohn, Simmons, & Nelson (2020).
According to Simonsohn, Simmons, & Nelson (2020), the solution is to specify all “reasonable” models to test an association and assess the joint distribution of results across model specifications.
Some researchers object to blindly running alternative specifications that may make little sense for theoretical or statistical reasons just for the sake of ‘robustness’. We are among those researchers. We believe one should test specifications that vary in as many of the potentially ad hoc assumptions as possible without testing any specifications that are not theoretically grounded.
This can be thought of as an explicit framework for sensitivity analyses and robustness checks that enables inferential statistics across model specifications.
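To make the idea concrete, here is a minimal sketch in base R (not the tutorial's actual code; the data and variable names `y`, `x`, `c1`, `c2` are hypothetical) that enumerates a small universe of specifications by crossing one analytic choice, which covariates to adjust for, fits each model, and summarizes the distribution of the estimate of interest:

```r
# Simulated data standing in for a real dataset
set.seed(1)
df <- data.frame(
  y  = rnorm(200),
  x  = rnorm(200),
  c1 = rnorm(200),
  c2 = rnorm(200)
)

# One ad hoc analytic choice: which covariates to include
control_sets <- list(
  "none"    = "",
  "c1"      = "+ c1",
  "c2"      = "+ c2",
  "c1 + c2" = "+ c1 + c2"
)

# Fit every specification and record the estimate for x
specs <- do.call(rbind, lapply(names(control_sets), function(ctrl) {
  fit <- lm(as.formula(paste("y ~ x", control_sets[[ctrl]])), data = df)
  data.frame(
    controls = ctrl,
    estimate = coef(fit)["x"],
    p.value  = summary(fit)$coefficients["x", "Pr(>|t|)"]
  )
}))

# Descriptive summary of the joint distribution across specifications;
# the full method (Simonsohn et al., 2020) adds a joint significance test,
# e.g. by resampling under the null
specs[order(specs$estimate), ]
median(specs$estimate)
mean(specs$p.value < .05)
```

A real specification curve would cross many such choices at once (outcome operationalizations, predictors, covariate sets, subsets, model types), producing dozens or hundreds of specifications rather than four.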
The hoped-for outcomes are a better understanding of how conceptual and analytic decisions alter the association of interest, and a more robust scientific literature with increased generalizability and translational value.