Six Principles and Six Summer Readings

I recently helped write a short piece for a divisional newsletter about methodological reform issues, together with three much smarter colleagues (Lucas, Fraley, and Roisman) — a kind of collaboration I highly recommend. That said, I take full responsibility for the ideas in this post.

Anyway, this turned out to be an interesting chance to write about methodological reform issues and to provide a "Summer Reading" list of current pieces. We tried to take a "friendly" approach by laying out the issues so that individual researchers could read more and make informed decisions on their own. Here is a link to the ReformPrimer in draft form. I posted the greatest hits below.

Six Principles and Practices to Consider Adopting in Your Own Work

1. Commit to Total Scientific Honesty.  See Lykken (1991) and Feynman (1985).

2. Be Aware of the Impact of Researcher Degrees of Freedom.  See Simmons et al. (2011).

3. Focus on Effect Size Estimation Rather than Statistical Significance.  See Cumming (2012), Fraley and Marks (2007), or Kline (2013).

4. Understand that Sample Size and Statistical Power Matter.  See Cohen (1962) and Ioannidis (2005), as well as related pieces like Francis (2013) and Schimmack (2012).

5. Review Papers for Methodological Completeness, Accuracy, and Plausibility.  See, for example, Kashy et al. (2009).  Sometimes effect sizes can just be too large, you know.  Standard errors are not the same thing as standard deviations…

6. Focus on Reproducibility and Replication. See, for example, Asendorpf et al. (2013).
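A quick way to internalize the power and effect-size points above is to run the numbers yourself. Here is a minimal sketch in Python (standard library only); `cohens_d` and `n_per_group` are my own illustrative helpers, not functions from any of the cited papers, and all the numbers are invented for illustration:

```python
from statistics import NormalDist
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    # Standardized mean difference using the pooled standard deviation.
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

def n_per_group(d, alpha=0.05, power=0.80):
    # Approximate per-group n for a two-sided, two-sample comparison,
    # using the normal approximation: n ~ 2 * ((z_{a/2} + z_power) / d)^2.
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# A "medium" effect (d = 0.5) needs roughly 63 people per group for
# 80% power -- more than many classic studies actually collected.
print(n_per_group(0.5))  # -> 63

# And the SE vs. SD distinction: the standard error shrinks with
# sqrt(n), while the standard deviation does not.
sd, n = 15.0, 100
se = sd / math.sqrt(n)   # 1.5, not 15
```

Nothing fancy, but playing with `d`, `alpha`, and `power` makes Principle 4 feel a lot less abstract.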

Six Recent Readings

These are the perfect readings to take to the beach or local pool.  Who needs a thriller loosely tied to Dante's Inferno or one of the books in A Song of Ice and Fire?

Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J. A., Fiedler, K., et al. (2013). Recommendations for increasing replicability in psychology. European Journal of Personality, 27, 108-119. DOI: 10.1002/per.1919  [Blame me for the horrible title of the Lucas and Donnellan comment.]

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science, 23, 524–532. DOI: 10.1177/0956797611430953

LeBel, E., & Peters, K. R. (2011). Fearing the future of empirical psychology: Bem's (2011) evidence of psi as a case study of deficiencies in modal research practice. Review of General Psychology, 15, 371-379. DOI: 10.1037/a0025172

Pashler, H., & Wagenmakers, E-J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7, 528-530. DOI: 10.1177/1745691612465253  [The whole issue is worth reading but we highlighted the opening piece.]

Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods, 17, 551-566. DOI: 10.1037/a0029487

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. DOI: 10.1177/0956797611417632