Correcting Effect Sizes for Publication Bias in Meta-Analyses

Publication bias is a substantial barrier to assessing the credibility of research, particularly in meta-analyses, because it inflates effect size estimates and can generate false positives. Although there is consensus that publication bias is widespread, little is known about how strongly it has affected different bodies of literature. In this project, Robbie van Aert, Jelte Wicherts, and Marcel van Assen assessed the extent of publication bias in a large-scale data set of 83 meta-analyses published in Psychological Bulletin (representing meta-analyses in psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (representing meta-analyses in medicine). Psychology is compared with medicine because medicine has a longer history of preregistering studies as a safeguard against publication bias. The authors systematically studied the severity of publication bias and its inflating effect on effect size estimates by applying multiple publication bias tests and the p-uniform method. The rank-correlation test, Egger’s test, the test of excess significance, and p-uniform’s publication bias test yielded evidence of publication bias in only a small proportion of the included meta-analyses. The authors also found little evidence that effect sizes were overestimated due to publication bias, and conclude that evidence for publication bias in the included meta-analyses is weak at best.
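To illustrate one of the methods named above, here is a minimal sketch of Egger’s regression test: standardized effects (effect / SE) are regressed on precision (1 / SE), and an intercept far from zero signals funnel-plot asymmetry consistent with small-study effects such as publication bias. This is a generic illustration written for this page, not the authors’ actual analysis code; the function name and the simulated data are assumptions for the example.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effects (effect / SE) on precision (1 / SE)
    by ordinary least squares and returns the intercept estimate and a
    two-sided p-value for the null hypothesis that the intercept is zero.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses          # standardized effects
    prec = 1.0 / ses           # precision
    # Design matrix: column of ones (intercept) plus precision (slope)
    X = np.column_stack([np.ones_like(prec), prec])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    df = len(z) - 2
    s2 = resid @ resid / df                  # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)        # covariance of coefficients
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    p = 2 * stats.t.sf(abs(t_intercept), df)
    return beta[0], p

# Simulated (hypothetical) unbiased meta-analysis of 40 studies:
# with no selection on significance, the intercept should sit near zero.
rng = np.random.default_rng(0)
ses = rng.uniform(0.05, 0.5, size=40)
effects = rng.normal(0.3, ses)
intercept, p = eggers_test(effects, ses)
```

In a biased literature, small studies with small or nonsignificant effects go unpublished, so the remaining small studies show systematically larger effects than the precise ones, pushing the intercept away from zero.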

Timeline

2016 — ongoing
