Reference List
Science and Significance
- Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83. doi: 10.1017/S0140525X0999152X
- Howard, G. S., Lau, M. Y., Maxwell, S. E., Venter, A., Lundy, R., & Sweeny, R. M. (2009). Do research literatures give correct answers? Review of General Psychology, 13(2), 116-121. doi: 10.1037/a0015468
- Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640-648. doi: 10.1097/EDE.0b013e31818131e7
- Jahoda, M., Lazarsfeld, P. F., & Zeisel, H. (1933). Die Arbeitslosen von Marienthal. Ein soziographischer Versuch über die Wirkungen langandauernder Arbeitslosigkeit. Leipzig: Hirzel.
- Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44(10), 1276-1284.
Guidelines
- Aguinis, H., & Vandenberg, R. J. (2014). An Ounce of Prevention Is Worth a Pound of Cure: Improving Research Quality Before Data Collection. Annual Review of Organizational Psychology and Organizational Behavior, 1, 569-595. doi: 10.1146/annurev-orgpsych-031413-091231
- Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J. A., Fiedler, K., . . . Wicherts, J. M. (2013). Recommendations for Increasing Replicability in Psychology. European Journal of Personality, 27(2), 108-119. doi: 10.1002/per.1919
- Funder, D. C., Levine, J. M., Mackie, D. M., Morf, C. C., Sansone, C., Vazire, S., & West, S. G. (2014). Improving the Dependability of Research in Personality and Social Psychology: Recommendations for Research and Educational Practice. Personality and Social Psychology Review, 18(1), 3-12. doi: 10.1177/1088868313507536
- Biemann, T. (2013). What If We Were Texas Sharpshooters? Predictor Reporting Bias in Regression Analysis. Organizational Research Methods, 16(3), 335-363. doi: 10.1177/1094428113485135
- Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review, 19(6), 975-991. doi: 10.3758/s13423-012-0322-y
- García-Pérez, M. A. (2012). Statistical conclusion validity: some common threats and simple remedies. Frontiers in Psychology, 3, 325. doi: 10.3389/fpsyg.2012.00325
- Wagenmakers, E. J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An Agenda for Purely Confirmatory Research. Perspectives on Psychological Science, 7(6), 632-638. doi: 10.1177/1745691612463078
Method Artifacts and Biases
- Friedman, H. H., & Amoo, T. (1999). Rating the Rating Scales. Journal of Marketing Management, 9(3), 114-123.
- Gaines, B. J., Kuklinski, J. H., & Quirk, P. J. (2007). The logic of the survey experiment reexamined. Political Analysis, 15(1), 1-20. doi: 10.1093/pan/mpl008
- Krebs, D., & Hoffmeyer-Zlotnik, J. H. P. (2010). Positive First or Negative First? Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 6(3), 118-127. doi: 10.1027/1614-2241/a000013
- Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437-455. doi: 10.1037/a0028085
- Ostroff, C., Kinicki, A. J., & Clark, M. A. (2002). Substantive and operational issues of response bias across levels of analysis: An example of climate-satisfaction relationships. Journal of Applied Psychology, 87(2), 355-368. doi: 10.1037/0021-9010.87.2.355
- Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. doi: 10.1037/0021-9010.88.5.879
- Ratcliff, R. (1993). Methods for Dealing with Reaction-Time Outliers. Psychological Bulletin, 114(3), 510-532. doi: 10.1037/0033-2909.114.3.510
- Ray, J. J. (1990). Acquiescence and Problems with Forced-Choice Scales. The Journal of Social Psychology, 130(3), 397-399. doi: 10.1080/00224545.1990.9924595
- Shaffer, J. P. (1995). Multiple hypothesis testing. Annual Review of Psychology, 46, 561-584. doi: 10.1146/annurev.ps.46.020195.003021
- MacKinnon, D. P., Krull, J. L., & Lockwood, C. M. (2000). Equivalence of the Mediation, Confounding and Suppression Effect. Prevention Science, 1(4), 173-181.
- Weijters, B., Cabooter, E., & Schillewaert, N. (2010). The effect of rating scale format on response styles: The number of response categories and response category labels. International Journal of Research in Marketing, 27(3), 236-247. doi: 10.1016/j.ijresmar.2010.02.004
Hypothesis Testing
- Cumming, G. (2008). Replication and p Intervals: p Values Predict the Future Only Vaguely, but Confidence Intervals Do Much Better. Perspectives on Psychological Science, 3(4), 286-300. doi: 10.1111/j.1745-6924.2008.00079.x
- Curran, P. J., & Hussong, A. M. (2009). Integrative Data Analysis: The Simultaneous Analysis of Multiple Data Sets. Psychological Methods, 14(2), 81-100. doi: 10.1037/a0015914
- van de Schoot, R., & Meeus, W. (2011). How to move beyond classical null hypothesis testing: A black bear story. In Proceedings of the 15th European Conference on Developmental Psychology (pp. 9-16). Medimond.
- Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, 57(5), 153-169. doi: 10.1016/j.jmp.2013.02.003
- Lambdin, C. (2012). Significance tests as sorcery: Science is empirical-significance tests are not. Theory & Psychology, 22(1), 67-90. doi: 10.1177/0959354311429854
- Leggett, N. C., Thomas, N. A., Loetscher, T., & Nicholls, M. E. R. (2013). The life of p: "Just significant" results are on the rise. Quarterly Journal of Experimental Psychology, 66(12), 2303-2309. doi: 10.1080/17470218.2013.863371
- Keselman, H. J., Miller, C. W., & Holland, B. (2011). Many Tests of Significance: New Methods for Controlling Type I Errors. Psychological Methods, 16(4), 420-431. doi: 10.1037/a0025810
- Shaffer, J. P. (1995). Multiple hypothesis testing. Annual Review of Psychology, 46, 561-584. doi: 10.1146/annurev.ps.46.020195.003021
- Sedlmeier, P. (2009). Beyond the Significance Test Ritual: What Is There? Zeitschrift für Psychologie / Journal of Psychology, 217(1), 1-5. doi: 10.1027/0044-3409.217.1.1
Statistical Power
- Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65(3), 145-153. doi: 10.1037/h0045186
- Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149-1160. doi: 10.3758/BRM.41.4.1149
- Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175-191.
- Hedges, L. V., & Pigott, T. D. (2001). The power of statistical tests in meta-analysis. Psychological Methods, 6(3), 203-217. doi: 10.1037/1082-989X.6.3.203
- Kenny, D. A., & Judd, C. M. (2013). Power Anomalies in Testing Mediation. Psychological Science. doi: 10.1177/0956797613502676
- Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9(2), 147-163. doi: 10.1037/1082-989x.9.2.147
- Maxwell, S. E., Kelley, K., & Rausch, J. R. (2008). Sample size planning for statistical power and accuracy in parameter estimation. Annual Review of Psychology, 59, 537-563. doi: 10.1146/annurev.psych.59.103006.093735
- O'Keefe, D. J. (2007). Brief Report: Post Hoc Power, Observed Power, A Priori Power, Retrospective Power, Prospective Power, Achieved Power: Sorting Out Appropriate Uses of Statistical Power Analyses. Communication Methods and Measures, 1(4), 291-299. doi: 10.1080/19312450701641375
- Onwuegbuzie, A. J., & Leech, N. L. (2004). Post Hoc Power: A Concept Whose Time Has Come. Understanding Statistics, 3(4), 201-230. doi: 10.1207/s15328031us0304_1
- Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard Power as a Protection Against Imprecise Power Estimates. Perspectives on Psychological Science, 9(3), 319-332. doi: 10.1177/1745691614528519
- Schimmack, U. (2012). The Ironic Effect of Significant Results on the Credibility of Multiple-Study Articles. Psychological Methods, 17(4), 551-566. doi: 10.1037/a0029487
- Sedlmeier, P., & Gigerenzer, G. (1989). Do studies of statistical power have an effect on the power of studies? Psychological Bulletin, 105(2), 309-316. doi: 10.1037/0033-2909.105.2.309
Examples
- Cafri, G., Kromrey, J. D., & Brannick, M. T. (2010). A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology. Multivariate Behavioral Research, 45(2), 239-270. doi: 10.1080/00273171003680187
- Connelly, B. L., Ireland, R. D., Reutzel, C. R., & Coombs, J. E. (2010). The Power and Effects of Entrepreneurship Research. Entrepreneurship Theory and Practice, 34(1), 131-149.
- Dybå, T., Kampenes, V. B., & Sjøberg, D. I. K. (2006). A systematic review of statistical power in software engineering experiments. Information and Software Technology, 48(8), 745-755. doi: 10.1016/j.infsof.2005.08.009
- Marszalek, J. M., Barber, C., Kohlhart, J., & Holmes, C. B. (2011). Sample Size in Psychological Research Over the Past 30 Years. Perceptual and Motor Skills, 112(2), 331-348. doi: 10.2466/03.11.pms.112.2.331-348
- Shen, W. N., Kiger, T. B., Davies, S. E., Rasch, R. L., Simon, K. M., & Ones, D. S. (2011). Samples in Applied Psychology: Over a Decade of Research in Review. Journal of Applied Psychology, 96(5), 1055-1064. doi: 10.1037/a0023322
- Taborsky, M. (2010). Sample Size in the Study of Behaviour. Ethology, 116(3), 185-202. doi: 10.1111/j.1439-0310.2010.01751.x
Evaluating Effects
- Cumming, G., & Fidler, F. (2009). Confidence Intervals: Better Answers to Better Questions. Zeitschrift für Psychologie / Journal of Psychology, 217(1), 15-26. doi: 10.1027/0044-3409.217.1.15
- Fritz, C. O., Morris, P. E., & Richler, J. J. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2-18. doi: 10.1037/a0024338
- Holender, D. (2008). Use of confidence intervals instead of tests of significance: epistemological and practical aspects. Psychologie du Travail et des Organisations, 14(1), 9-42.
- Kelley, K., & Preacher, K. J. (2012). On effect size. Psychological Methods, 17(2), 137-152. doi: 10.1037/a0028086
- Prentice, D. A., & Miller, D. T. (1992). When small effects are impressive. Psychological Bulletin, 112, 160-164.
- Sun, S., Pan, W., & Wang, L. L. (2010). A comprehensive review of effect size reporting and interpreting practices in academic journals in education and psychology. Journal of Educational Psychology, 102(4), 989-1004. doi: 10.1037/a0019507
- Wang, J. (2008). Effect Size and Practical Importance: A Non-Monotonic Match. International Journal of Research & Method in Education, 31(2), 125-132.
Scientific Misconduct
- Brookes, P. S. (2014). Internet publicity of data problems in the bioscience literature correlates with enhanced corrective action. PeerJ, 2, e313. doi: 10.7717/peerj.313
- Budd, J. M., Sievert, M., & Schultz, T. R. (1998). Phenomena of retraction: Reasons for retraction and citations to the publications. JAMA: Journal of the American Medical Association, 280(3), 296-297.
- Farthing, M. J. G. (2014). Research misconduct: A grand global challenge for the 21st Century. Journal of Gastroenterology and Hepatology, 29(3), 422-427. doi: 10.1111/jgh.12500
- Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The Long Way From α-Error Control to Validity Proper: Problems With a Short-Sighted False-Positive Debate. Perspectives on Psychological Science, 7(6), 661-669. doi: 10.1177/1745691612462587
- Grieneisen, M. L., & Zhang, M. (2012). A Comprehensive Survey of Retracted Articles from the Scholarly Literature. PLoS ONE, 7(10), e44118. doi: 10.1371/journal.pone.0044118
- Ioannidis, J. P. A. (2012). Why Science Is Not Necessarily Self-Correcting. Perspectives on Psychological Science, 7(6), 645-654. doi: 10.1177/1745691612464056
- John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532. doi: 10.1177/0956797611430953
- Martinson, B. C., Anderson, M. S., & de Vries, R. (2005). Scientists behaving badly. Nature, 435(7043), 737-738.
- Murayama, K., Pekrun, R., & Fiedler, K. (2014). Research Practices That Can Prevent an Inflation of False-Positive Rates. Personality and Social Psychology Review, 18(2), 107-118. doi: 10.1177/1088868313496330
- Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why Has the Number of Scientific Retractions Increased? PLoS ONE, 8(7), e68397. doi: 10.1371/journal.pone.0068397
- Redman, B. K., Yarandi, H. N., & Merz, J. F. (2008). Empirical developments in retraction. Journal of Medical Ethics: Journal of the Institute of Medical Ethics, 34(11), 807-809.
Research and Publication Practice
- Asendorpf, J. B., Conner, M., De Fruyt, F., De Houwer, J., Denissen, J. J. A., Fiedler, K., . . . Wicherts, J. M. (2013). Replication is More than Hitting the Lottery Twice. European Journal of Personality, 27(2), 138-144.
- Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The Rules of the Game Called Psychological Science. Perspectives on Psychological Science, 7(6), 543-554. doi: 10.1177/1745691612459060
- Bakker, M., & Wicherts, J. M. (2011). The (mis)reporting of statistical results in psychology journals. Behavior Research Methods, 43(3), 666-678. doi: 10.3758/s13428-011-0089-5
- Brand, A., Bradley, M. T., Best, L. A., & Stoica, G. (2008). Accuracy of effect size estimates from published psychological research. Perceptual and Motor Skills, 106(2), 645-649. doi: 10.2466/pms.106.2.645-649
- Buela-Casal, G. (2014). Pathological publishing: A new psychological disorder with legal consequences? European Journal of Psychology Applied to Legal Context, 6(2), 91-97. doi: 10.1016/j.ejpal.2014.06.005
- Harris, A., Reeder, R., & Hyun, J. (2011). Survey of Editors and Reviewers of High-Impact Psychology Journals: Statistical and Research Design Problems in Submitted Manuscripts. Journal of Psychology, 145(3), 195-209. doi: 10.1080/00223980.2011.555431
- Ioannidis, J. P. A., Munafò, M. R., Fusar-Poli, P., Nosek, B. A., & David, S. P. (2014). Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention. Trends in Cognitive Sciences, 18(5), 235-241. doi: 10.1016/j.tics.2014.02.010
- Ledgerwood, A., & Sherman, J. W. (2012). Short, Sweet, and Problematic? The Rise of the Short Report in Psychological Science. Perspectives on Psychological Science, 7(1), 60-66. doi: 10.1177/1745691611427304
- Makel, M. C. (2014). The Empirical March: Making Science Better at Self-Correction. Psychology of Aesthetics, Creativity, and the Arts, 8(1), 2-7. doi: 10.1037/a0035803
- Maner, J. K. (2014). Let’s Put Our Money Where Our Mouth Is: If Authors Are to Change Their Ways, Reviewers (and Editors) Must Change With Them. Perspectives on Psychological Science, 9(3), 343-351. doi: 10.1177/1745691614528215
- Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science, 7(6), 615-631. doi: 10.1177/1745691612459058