
Development Intervention Forecasting

Kenya


Policy Context

Who knows what about the impacts of large policy interventions? The design and selection of development interventions have traditionally relied on expert knowledge. Recent concerns about the generalizability, transferability, and replicability of experimental findings underscore the need to move beyond simply evaluating interventions toward new strategies for understanding what works where.

Forecasts of experimental results can provide important insights into the selection of policies, the design of experiments, and the production of knowledge (DellaVigna et al., 2019). To policymakers, forecast accuracy can signal whose recommendations should be given more weight. To researchers, accurate forecasts can inform which policies should be evaluated. In the case of a null result (no significant findings), expert forecasts can highlight why a finding is interesting, potentially mitigating publication bias. Finally, forecasts of experimental effects can quantify how much new information a study produces, since many results seem obvious after the fact.

Several basic questions in this space remain unanswered: (1) are local residents or academic experts more knowledgeable about the causal effects of interventions, and (2) what empirical strategies can extract the most information about the effects of these interventions before they launch?

Study Design

Researchers partner with teams running several large pre-registered randomized controlled trials (RCTs) in Kenya, including a field experiment on the general equilibrium effects of unconditional cash transfers led by CEGA Faculty Director Edward Miguel. They then collect predictions of treatment effects on a range of outcomes from traditional academic experts, locals similar to the intervention recipients, and several other groups. Finally, researchers assess the ex-post accuracy of each group's predictions by comparing the elicited forecasts to the actual experimental treatment effects.
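At its core, the ex-post accuracy check is a comparison of elicited forecasts against realized treatment effects. A minimal sketch of that comparison is below; the column names, the toy numbers, and the use of absolute error as the accuracy metric are illustrative assumptions, not details taken from the study.

```python
import pandas as pd

# One row per (forecaster group, outcome): the elicited prediction and the
# experimentally estimated treatment effect (all numbers are made up).
forecasts = pd.DataFrame({
    "group":     ["academic", "academic", "recipient", "recipient"],
    "outcome":   ["earnings", "assets", "earnings", "assets"],
    "predicted": [0.12, 0.05, 0.20, 0.03],
    "actual":    [0.15, 0.04, 0.15, 0.04],
})

# Score each forecast by its absolute error against the realized effect...
forecasts["abs_error"] = (forecasts["predicted"] - forecasts["actual"]).abs()

# ...then compare average accuracy across forecaster groups.
print(forecasts.groupby("group")["abs_error"].mean())
```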

Results and Policy Lessons

This study examines belief accuracy using 20,000 forecasts of 50 causal effects from three large experiments in Kenya, made by academics, people similar to intervention recipients, and nonexperts. Researchers find that average predicted effects track experimental results well. Recipient-types are less accurate than academics on average, but are at least as accurate for interventions and outcomes that are likely more familiar to them. The mean forecast of each group outperforms more than 75% of the individuals who comprise it, and averaging just five forecasts substantially reduces error, indicating strong “wisdom-of-crowds” effects. Three measures of academic expertise (rank, citations, and conducting research in East Africa) and two measures of confidence do not correlate with accuracy. Among recipient-types, high-accuracy “superforecasters” can be identified from observable characteristics; small groups of these superforecasters are as accurate as the academic respondents.
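The “wisdom-of-crowds” finding, that the mean of even a handful of forecasts beats most individual forecasters, follows from averaging out idiosyncratic noise. The simulation below illustrates the mechanism under stated assumptions: forecasts are modeled as the true effect plus independent noise, and the noise level, number of forecasters, and group sizes are all hypothetical rather than drawn from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.10

# Hypothetical pool of individual forecasts: true effect plus noise.
n_forecasters = 1_000
forecasts = true_effect + rng.normal(0.0, 0.08, size=n_forecasters)

for k in (1, 5, 25):
    # Average many random k-person crowds and measure each crowd's error.
    crowds = rng.choice(forecasts, size=(10_000, k))
    mae = np.abs(crowds.mean(axis=1) - true_effect).mean()
    print(f"mean absolute error of a {k}-forecast average: {mae:.4f}")
```

As k grows, the crowd average's error shrinks roughly with the square root of the group size, which is consistent with a small group's mean outperforming most of its individual members.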

For more detail, see the full paper.

Researchers
Timeline

2018–2021


Data Science for Development

Learn more about our work from this theme:

Work & Education

Announcing the launch of the Social Science Prediction Platform

Emerging benefits and insights from a year of forecasting on the Social Science Prediction Platform

The Future of Forecasting — Highlights from the BITSS Workshop

Get the Resources

Presentations (CEGA):

Slides: Forecasting Social Science Research Results - Stefano DellaVigna (SEEDEC 2019)

Video: (Keynote) Forecasting Social Science Research Results - Stefano DellaVigna (SEEDEC 2019)

Research Publications (CEGA):

Working Paper: Forecasting the Results of Experiments: Piloting an Elicitation Strategy
