Background: Concerns about the reproducibility of empirical findings in Psychology permeate the scientific landscape. Specifically, mechanisms related to the publication process (i.e., dissemination biases), such as publication bias, selective reporting, or the decline effect, pose a substantial threat to the credibility and validity of observed effects. However, evidence about the prevalence and strength of such biases, particularly the decline effect, remains comparatively sparse. To help fill this gap, we present evidence from a meta-meta-analysis examining dissemination biases in general, and decline effects in particular, in all retrievable meta-analyses published in the journal Intelligence to date.
Methods: We included data from all meta-analyses reporting standardized effect size estimates (e.g., Cohen's d, Pearson's r) that have been published in the journal Intelligence (18 studies, 31 effects, N > 680,000). In a first step, potentially eligible studies were coded twice by the same researcher [JP] to retrieve summary effects of the investigated research questions, the covered time-span, and the number of included studies. Moreover, impact factors, citation counts, and sample sizes of initial studies (i.e., the first published study of a respective meta-analysis) were retrieved. This approach allowed us to investigate absolute differences between initial study effects and summary effect sizes, as well as to compare the strength of declining vs. non-declining effects. In a second step, data of primary studies (i.e., where effect sizes, precision measures, and publication year were provided in data tables or supplements; 77% of eligible studies) were extracted twice independently by two researchers [JP; MS]. This approach allowed us to investigate average annual effect change rates (and declines) as well as potential non-linearity of effect changes by means of segmented line regressions (i.e., joinpoint regressions).
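The segmented (joinpoint) regression described above can be illustrated with a minimal sketch: fit a piecewise-linear model with a single breakpoint by grid search over candidate breakpoint years, keeping the fit with the lowest squared error. The function name, the single-breakpoint restriction, and the least-squares grid search are assumptions for illustration, not the authors' actual implementation (joinpoint software typically allows multiple breakpoints and permutation tests).

```python
import numpy as np

def fit_joinpoint(years, effects):
    """Illustrative one-breakpoint segmented regression via grid search.

    Sketch of the joinpoint idea only; not the authors' implementation.
    Returns (breakpoint, slope_before, slope_after, sse).
    """
    years = np.asarray(years, dtype=float)
    effects = np.asarray(effects, dtype=float)
    best = None
    # Candidate breakpoints: interior observed years only
    for bp in np.unique(years)[1:-1]:
        # Design matrix: intercept, year, and hinge term max(0, year - bp);
        # the hinge coefficient is the change in slope after the breakpoint.
        X = np.column_stack([
            np.ones_like(years),
            years,
            np.maximum(0.0, years - bp),
        ])
        coef, *_ = np.linalg.lstsq(X, effects, rcond=None)
        sse = float(np.sum((effects - X @ coef) ** 2))
        if best is None or sse < best[3]:
            best = (float(bp), float(coef[1]), float(coef[1] + coef[2]), sse)
    return best
```

For example, effect sizes that decline linearly until some year and then plateau would yield a breakpoint at that year, a negative pre-breakpoint slope, and a near-zero post-breakpoint slope.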
Results: The absolute difference between initially published findings and meta-analytical summary effects (henceforth: crude differences) averaged r = .16 across all included primary studies and r = .20 when grey literature was excluded. Out of 31 eligible effect sizes, 18 showed crude effect declines (i.e., initial studies vs. summary effects), whilst 10 showed increases and 3 changed signs. Crude differences were substantially larger for declining than for increasing effects (r = .19 vs. .09) and were negatively associated with the sample sizes of initial studies, although the associated p-values did not reach nominal significance. Meta-analyses covering longer time-spans showed significantly lower cross-temporal regression coefficients of primary effect sizes, whilst segmented line regressions did not yield significantly differing slopes over time in almost all meta-analyses.
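The crude-difference tallies above can be reproduced from (initial, summary) effect pairs with a simple classification rule. This is a hedged sketch: the function name and the exact decision rules (magnitude comparison for decline vs. increase, opposite signs for a sign change) are assumptions for illustration, since the abstract does not spell them out.

```python
def classify_effects(pairs):
    """Classify (initial_r, summary_r) pairs and average crude differences.

    Assumed rules (illustrative only): a sign change means the two
    effects have opposite signs; otherwise a decline means the summary
    effect is weaker in absolute magnitude than the initial effect.
    """
    counts = {"decline": 0, "increase": 0, "sign_change": 0}
    crude = []
    for r_initial, r_summary in pairs:
        # Crude difference: absolute distance between the two estimates
        crude.append(abs(r_initial - r_summary))
        if r_initial * r_summary < 0:
            counts["sign_change"] += 1
        elif abs(r_summary) < abs(r_initial):
            counts["decline"] += 1
        else:
            counts["increase"] += 1
    return counts, sum(crude) / len(crude)
```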
Discussion: The present meta-meta-analysis shows that concerns about biased effect size estimates in the available Psychological literature appear to be justified. Effect sizes of initially published studies should be interpreted with caution until evidence from further replications is available, because average differences from summary effects amount to non-trivial, small-to-moderate effect strengths. Effect differences appear to be larger for declining effects and when initially published studies have smaller sample sizes.