MetaLAB: Crowdsourced Meta-Analyses

Aggregating data across studies is essential for building cumulative evidence, but it is challenging in fields such as language acquisition, where sample sizes are characteristically small and study populations can be very diverse. MetaLab is an online repository of meta-analyses and a platform that supports the generation of robust evidence by addressing low statistical power and facilitating appropriate research planning, both key aspects of replicable research.
Christina Bergmann and colleagues used MetaLab’s collection of 12 standardized meta-analyses on language development in children between birth and 5 years of age to assess typical effect sizes, sample sizes, statistical power, and methodological choices in the field. With a median effect size of Cohen’s d = 0.45 and a typical sample size of 18 participants per cell, the study found that observed power was only 44%, far below most recommendations. In particular, the findings suggest that most studies testing infants and toddlers are substantially underpowered, even when aiming to detect only a main effect. To improve the replicability of developmental research, the authors recommend prospectively calculating power, carefully selecting methods, reporting all data, and increasing the use and availability of meta-analyses to generate cumulative evidence.
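The relationship between these numbers can be illustrated with a power calculation. The sketch below assumes a two-sided one-sample (within-participant) t-test at α = .05; the designs pooled in the actual meta-analyses vary, so this is an approximation of how an effect size of d = 0.45 and 18 participants per cell yield power in the neighborhood of 44%:

```python
import numpy as np
from scipy import stats

def one_sample_power(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample t-test.

    d     -- assumed standardized effect size (Cohen's d)
    n     -- participants per cell
    alpha -- two-sided significance level
    """
    df = n - 1
    ncp = d * np.sqrt(n)                     # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)  # critical t value
    # Power = probability the noncentral t statistic lands beyond
    # either critical value
    return (stats.nct.sf(t_crit, df, ncp)
            + stats.nct.cdf(-t_crit, df, ncp))

# Typical values reported for the field: d = 0.45, n = 18 per cell
print(round(one_sample_power(0.45, 18), 2))  # roughly 0.44
```

Inverting the same function shows why the authors urge prospective power analysis: reaching the conventional 80% power with d = 0.45 under these assumptions requires roughly twice as many participants per cell as the typical study recruits.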

Researchers
  • Christina Bergmann
  • Sho Tsuji
  • Molly Lewis
  • Mika Braginsky
  • Page Piccinini
  • Alejandrina Cristia
  • Michael C. Frank
Partners
  • MetaLab
Timeline

2016 — ongoing
