April 14, 2025

Reading List

Science and Significance

Guides

Method Artifacts/Biases

Hypothesis Testing

Statistical Power

  • Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65(3), 145-153. doi: 10.1037/h0045186
  • Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149-1160. doi: 10.3758/BRM.41.4.1149
  • Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175-191.
  • Hedges, L. V., & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6(3), 203-217. doi: 10.1037/1082-989X.6.3.203
  • Kenny, D. A., & Judd, C. M. (2013). Power Anomalies in Testing Mediation. Psychological Science. Advance online publication. doi: 10.1177/0956797613502676
  • Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9(2), 147-163. doi: 10.1037/1082-989x.9.2.147
  • Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537-563. doi: 10.1146/annurev.psych.59.103006.093735
  • O'Keefe, D. J. (2007). Brief Report: Post Hoc Power, Observed Power, A Priori Power, Retrospective Power, Prospective Power, Achieved Power: Sorting Out Appropriate Uses of Statistical Power Analyses. Communication Methods and Measures, 1(4), 291-299. doi: 10.1080/19312450701641375
  • Onwuegbuzie, A. J., & Leech, N. L. (2004). Post Hoc Power: A Concept Whose Time Has Come. Understanding Statistics, 3(4), 201-230. doi: 10.1207/s15328031us0304_1
  • Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard Power as a Protection Against Imprecise Power Estimates. Perspectives on Psychological Science, 9(3), 319-332. doi: 10.1177/1745691614528519
  • Schimmack, U. (2012). The Ironic Effect of Significant Results on the Credibility of Multiple-Study Articles. Psychological Methods, 17(4), 551-566. doi: 10.1037/a0029487
  • Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105(2), 309-316. doi: 10.1037/0033-2909.105.2.309
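
Several of the entries above (Cohen, 1962; Faul et al., 2007, 2009; Maxwell, Kelley, & Rausch, 2008) deal with a priori power analysis: fixing an effect size, an alpha level, and a target power, then solving for the required sample size. As a minimal illustration of the kind of calculation G*Power performs, here is a Python sketch using statsmodels; the inputs (d = 0.5, alpha = .05, power = .80) are conventional placeholder values, not figures taken from these sources.

    # A priori power analysis for an independent-samples t-test.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Solve for the per-group n needed for 80% power at d = 0.5, alpha = .05.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                       power=0.80, alternative='two-sided')
    print(f"Required n per group: {n_per_group:.1f}")  # approx. 64

    # Conversely: the power achieved with a fixed n = 30 per group.
    achieved = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05,
                              alternative='two-sided')
    print(f"Power with n = 30 per group: {achieved:.2f}")  # approx. .47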

Examples

  • Cafri, G., Kromrey, J. D., & Brannick, M. T. (2010). A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology. Multivariate Behavioral Research, 45(2), 239-270. doi: 10.1080/00273171003680187
  • Connelly, B. L., Ireland, R. D., Reutzel, C. R., & Coombs, J. E. (2010). The Power and Effects of Entrepreneurship Research. Entrepreneurship Theory and Practice, 34(1), 131-149.
  • Dybå, T., Kampenes, V. B., & Sjøberg, D. I. K. (2006). A systematic review of statistical power in software engineering experiments. Information and Software Technology, 48(8), 745-755. doi: 10.1016/j.infsof.2005.08.009
  • Marszalek, J. M., Barber, C., Kohlhart, J., & Holmes, C. B. (2011). Sample Size in Psychological Research Over the Past 30 Years. Perceptual and Motor Skills, 112(2), 331-348. doi: 10.2466/03.11.pms.112.2.331-348
  • Shen, W. N., Kiger, T. B., Davies, S. E., Rasch, R. L., Simon, K. M., & Ones, D. S. (2011). Samples in Applied Psychology: Over a Decade of Research in Review. Journal of Applied Psychology, 96(5), 1055-1064. doi: 10.1037/a0023322
  • Taborsky, M. (2010). Sample Size in the Study of Behaviour. Ethology, 116(3), 185-202. doi: 10.1111/j.1439-0310.2010.01751.x
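
The surveys in this section typically tabulate how much power published studies achieve for small, medium, and large benchmark effects. A minimal version of such a tabulation, assuming a purely hypothetical median group size of n = 40 (Python, statsmodels):

    # Achieved power of a two-sided independent-samples t-test at n = 40
    # per group, across Cohen's conventional effect-size benchmarks.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
        p = analysis.power(effect_size=d, nobs1=40, alpha=0.05,
                           alternative='two-sided')
        print(f"{label:6s} (d = {d}): power = {p:.2f}")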

Evaluating Effects

  • Cumming, G., & Fidler, F. (2009). Confidence intervals: Better answers to better questions. Zeitschrift für Psychologie / Journal of Psychology, 217(1), 15-26. doi: 10.1027/0044-3409.217.1.15
  • Fritz, C. O., Morris, P. E., & Richler, J. J. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2-18. doi: 10.1037/a0024338
  • Holender, D. (2008). Use of confidence intervals instead of tests of significance: Epistemological and practical aspects. Psychologie du Travail et des Organisations, 14(1), 9-42.
  • Kelley, K., & Preacher, K. J. (2012). On effect size. Psychological Methods, 17(2), 137-152. doi: 10.1037/a0028086
  • Prentice, D. A., & Miller, D. T. (1992). When small effects are impressive. Psychological Bulletin, 112(1), 160-164.
  • Sun, S., Pan, W., & Wang, L. L. (2010). A comprehensive review of effect size reporting and interpreting practices in academic journals in education and psychology. Journal of Educational Psychology, 102(4), 989-1004. doi: 10.1037/a0019507
  • Wang, J. (2008). Effect Size and Practical Importance: A Non-Monotonic Match. International Journal of Research & Method in Education, 31(2), 125-132.
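
Cumming and Fidler (2009) and Fritz, Morris, and Richler (2012) argue for reporting effect sizes with interval estimates rather than bare p values. A minimal sketch of that practice for two independent groups, using the standard large-sample approximation to the variance of a standardized mean difference; the data are simulated placeholders, not values from any study listed here:

    import numpy as np
    from scipy import stats

    def cohens_d_ci(x, y, confidence=0.95):
        """Cohen's d with an approximate normal-theory confidence interval."""
        nx, ny = len(x), len(y)
        pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1)
                             + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
        d = (np.mean(x) - np.mean(y)) / pooled_sd
        # Common large-sample standard error of d.
        se = np.sqrt((nx + ny) / (nx * ny) + d**2 / (2 * (nx + ny)))
        z = stats.norm.ppf(1 - (1 - confidence) / 2)
        return d, (d - z * se, d + z * se)

    rng = np.random.default_rng(1)
    x = rng.normal(0.5, 1.0, 50)  # simulated demo data
    y = rng.normal(0.0, 1.0, 50)
    d, (lower, upper) = cohens_d_ci(x, y)
    print(f"d = {d:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")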

Scientific Misconduct

  • Brookes, P. S. (2014). Internet publicity of data problems in the bioscience literature correlates with enhanced corrective action. PeerJ, 2, e313. doi: 10.7717/peerj.313
  • Budd, J. M., Sievert, M., & Schultz, T. R. (1998). Phenomena of retraction: Reasons for retraction and citations to the publications. JAMA: Journal of the American Medical Association, 280(3), 296-297.
  • Farthing, M. J. G. (2014). Research misconduct: A grand global challenge for the 21st Century. Journal of Gastroenterology and Hepatology, 29(3), 422-427. doi: 10.1111/jgh.12500
  • Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The Long Way From α-Error Control to Validity Proper: Problems With a Short-Sighted False-Positive Debate. Perspectives on Psychological Science, 7(6), 661-669. doi: 10.1177/1745691612462587
  • Grieneisen, M. L., & Zhang, M. (2012). A Comprehensive Survey of Retracted Articles from the Scholarly Literature. PLoS ONE, 7(10), e44118. doi: 10.1371/journal.pone.0044118
  • Ioannidis, J. P. A. (2012). Why Science Is Not Necessarily Self-Correcting. Perspectives on Psychological Science, 7(6), 645-654. doi: 10.1177/1745691612464056
  • John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532. doi: 10.1177/0956797611430953
  • Martinson, B. C., Anderson, M. S., & de Vries, R. (2005). Scientists behaving badly. Nature, 435(7043), 737-738.
  • Murayama, K., Pekrun, R., & Fiedler, K. (2014). Research Practices That Can Prevent an Inflation of False-Positive Rates. Personality and Social Psychology Review, 18(2), 107-118. doi: 10.1177/1088868313496330
  • Redman, B. K., Yarandi, H. N., & Merz, J. F. (2008). Empirical developments in retraction. Journal of Medical Ethics, 34(11), 807-809.
  • Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why Has the Number of Scientific Retractions Increased? PLoS ONE, 8(7), e68397. doi: 10.1371/journal.pone.0068397

Research and Publication Practices

  • Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J. A., Fiedler, K., . . . Wicherts, J. M. (2013). Replication is More than Hitting the Lottery Twice. European Journal of Personality, 27(2), 138-144.
  • Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The Rules of the Game Called Psychological Science. Perspectives on Psychological Science, 7(6), 543-554. doi: 10.1177/1745691612459060
  • Bakker, M., & Wicherts, J. M. (2011). The (mis)reporting of statistical results in psychology journals. Behavior Research Methods, 43(3), 666-678. doi: 10.3758/s13428-011-0089-5
  • Brand, A., Bradley, M. T., Best, L. A., & Stoica, G. (2008). Accuracy of effect size estimates from published psychological research. Perceptual and Motor Skills, 106(2), 645-649. doi: 10.2466/pms.106.2.645-649
  • Buela-Casal, G. (2014). Pathological publishing: A new psychological disorder with legal consequences? European Journal of Psychology Applied to Legal Context, 6(2), 91-97. doi: 10.1016/j.ejpal.2014.06.005
  • Harris, A., Reeder, R., & Hyun, J. (2011). Survey of Editors and Reviewers of High-Impact Psychology Journals: Statistical and Research Design Problems in Submitted Manuscripts. Journal of Psychology, 145(3), 195-209. doi: 10.1080/00223980.2011.555431
  • Ioannidis, J. P. A., Munafò, M. R., Fusar-Poli, P., Nosek, B. A., & David, S. P. (2014). Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention. Trends in Cognitive Sciences, 18(5), 235-241. doi: 10.1016/j.tics.2014.02.010
  • Ledgerwood, A., & Sherman, J. W. (2012). Short, Sweet, and Problematic? The Rise of the Short Report in Psychological Science. Perspectives on Psychological Science, 7(1), 60-66. doi: 10.1177/1745691611427304
  • Makel, M. C. (2014). The Empirical March: Making Science Better at Self-Correction. Psychology of Aesthetics, Creativity, and the Arts, 8(1), 2-7. doi: 10.1037/a0035803
  • Maner, J. K. (2014). Let’s Put Our Money Where Our Mouth Is: If Authors Are to Change Their Ways, Reviewers (and Editors) Must Change With Them. Perspectives on Psychological Science, 9(3), 343-351. doi: 10.1177/1745691614528215
  • Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science, 7(6), 615-631. doi: 10.1177/1745691612459058
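
Several entries in this section (e.g., John, Loewenstein, & Prelec, 2012; Murayama, Pekrun, & Fiedler, 2014) concern how questionable research practices inflate false-positive rates. The following Monte Carlo sketch illustrates one such practice, optional stopping; the sample sizes and the two-stage rule are arbitrary choices for illustration, not a procedure described in those papers:

    # Optional stopping under a true null hypothesis: test at n = 20 per
    # group, and if not significant, add 20 more per group and test again.
    # The resulting false-positive rate exceeds the nominal 5%.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, false_pos = 10_000, 0
    for _ in range(n_sims):
        a, b = rng.normal(size=20), rng.normal(size=20)  # no true effect
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_pos += 1
            continue
        a = np.concatenate([a, rng.normal(size=20)])
        b = np.concatenate([b, rng.normal(size=20)])
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_pos += 1
    print(f"Empirical false-positive rate: {false_pos / n_sims:.3f}")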