One channel through which the digitization of government-to-person (G2P) payments can promote financial inclusion is the provision of low-interest credit collateralized by future payments. These products are easy to implement, but estimating their impact on financial inclusion is hard. Traditionally, financial inclusion has been measured in economics through household survey data. Unfortunately, household survey data are expensive and time-consuming to collect and analyze, and lessons learned in one context may not translate well to another.
If the welfare impacts of new financial technologies could instead be estimated with the speed and cost of administrative data collection, the effects of those technologies on financial inclusion could be evaluated rapidly. Doing so requires assuming that the only channel through which shocks affect a household's take-up of a trusted financial technology with known returns is the household's intertemporal marginal rate of substitution (IMRS). While there is an ongoing debate in behavioral economics over the reasonableness of this assumption, its primary testable implication has not yet been tested.
A necessary first step is to test the underlying assumption that the IMRS households apply to choices between small mobile money payments over the phone is the same IMRS they apply to other economic decisions. This pilot tested that assumption by eliciting IMRS while experimentally shocking household liquidity with cash transfers (CT) and providing households with access to a commitment savings account (CS).
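As a concrete illustration, the cross-randomized design (interest-rate arms crossed with cash-transfer status, using the arm values reported in this summary) could be sketched as follows. The function name and the simple independent coin-flip assignment are illustrative assumptions; the study's actual randomization procedure, including any stratification or fixed arm proportions, is not described here.

```python
import random

# Illustrative sketch of the pilot's cross-randomized design: three
# monthly interest-rate arms crossed with cash-transfer status.
# Independent per-participant assignment is an assumption made for
# this sketch, not a description of the study's actual procedure.

INTEREST_ARMS = [-0.03, 0.00, 0.20]  # monthly interest rates
CT_AMOUNT_KSH = 8_000                # transfer size (approx. 80 USD)
CT_SHARE = 0.40                      # 40% receive the transfer

def assign_arms(participant_ids, seed=0):
    """Assign each participant an interest-rate arm and transfer status."""
    rng = random.Random(seed)
    return {
        pid: {
            "monthly_rate": rng.choice(INTEREST_ARMS),
            "transfer_ksh": CT_AMOUNT_KSH if rng.random() < CT_SHARE else 0,
        }
        for pid in participant_ids
    }

assignments = assign_arms(range(350))
```

Cross-randomizing the two treatments lets the interest-rate and liquidity-shock effects be estimated independently from the same sample.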
A total of 350 household heads in Nairobi received access to 8-week commitment savings accounts, into which they could deposit money at their preferred rate via M-PESA. Interest rates were randomly assigned at -3%, 0%, or 20% monthly, and cash transfers were cross-randomized: 40% of participants received an 8,000 KSH cash transfer (approximately 80 USD), while 60% received none. The research team elicited IMRS using money-earlier-or-later (MEL) games: short questions, common in behavioral economics, that ask participants to trade off money today against money in the future.
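To make the MEL elicitation concrete, the sketch below shows how a multiple-price-list style MEL question can bracket a respondent's implied monthly discount factor, the quantity underlying the IMRS for money. The amounts and the pattern of choices are hypothetical; the study's exact question format is not specified in this summary.

```python
# Hypothetical MEL multiple price list: every row offers 200 KSH today
# versus a larger amount in one month. The row where the respondent
# first switches to "later" brackets the implied monthly discount
# factor delta, since indifference means earlier = delta * later.

EARLIER = 200  # KSH offered today in every row (illustrative amount)

# Each row: (KSH offered in one month, respondent chose the later amount?)
price_list = [
    (200, False),
    (220, False),
    (240, False),
    (260, True),   # first row where "later" is chosen (switch point)
    (280, True),
    (300, True),
]

def implied_discount_factor(rows, earlier):
    """Return (lower, upper) bounds on the monthly discount factor delta
    implied by the respondent's switch point in the price list."""
    for i, (later, chose_later) in enumerate(rows):
        if chose_later:
            lower = earlier / later                         # chose later here
            upper = earlier / rows[i - 1][0] if i > 0 else 1.0  # chose earlier above
            return lower, upper
    # Never switched to "later": delta lies below earlier / largest later amount
    return 0.0, earlier / rows[-1][0]

lo, hi = implied_discount_factor(price_list, EARLIER)
# For this hypothetical respondent, delta is bracketed roughly
# between 0.77 and 0.84 per month.
```

One limitation this sketch makes visible: the elicited bracket is only as reliable as the choices themselves, which is precisely where the pilot found MEL responses to be noisy.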
At Week 0, participants visited Busara Lab, where they completed lab experiments consisting of a brief demographic module, a brief income and expenditure module, and MEL games. In Weeks 1, 2, 4, 6, and 8, participants completed phone surveys containing the MEL games and the income and expenditure module. In Week 2, participants either received or did not receive their cash transfer and, in a separate call, were onboarded into the commitment savings accounts before taking the phone survey. In Week 10, commitment savings payouts were made into participants' savings accounts, and participants completed a final phone survey with qualitative questions on their experience and savings decisions over the 10-week study period.
Results and Policy Implications
This pilot found that eliciting IMRS with standard MEL games proved far noisier and more biased than anticipated. There was little correlation between the savings preferences households stated in the MEL games and the deposits they actually made into their savings accounts. In response, the research team turned to theory for guidance on complementary approaches to measuring welfare gains that do not depend on precise or accurate measurement of IMRS.
Households in this study also responded far less to interest rates than expected. Higher interest rates led households primarily to increase the amounts they deposited into their commitment savings accounts; whether or not a household saved at all was unaffected.
Interestingly, households assigned the -3% monthly interest rate were more likely to deposit into their savings accounts than those assigned the 0% rate. The research team believes this is because, during onboarding for the commitment savings product, households offered the -3% rate were more likely to ask questions about the product, increasing their familiarity with it.
The existing literature and traditional approaches to measuring IMRS did not work well in the context of this pilot study. Is the problem one of measurement (i.e., how MEL games are conducted), or is something fundamentally wrong with using the neoclassical model to understand savings and borrowing decisions?
Promising new approaches and tests for understanding how people make borrowing and savings decisions could give researchers a more accurate way to measure "willingness to pay". Many factors contribute to why people choose to save, but all of them are ultimately channeled through willingness to pay, which is what determines someone's willingness to save. There is substantial value in further evaluating behavioral methodologies and interventions that can inform the public and private sectors about how households make savings and debt repayment decisions and how best to assist them.
January - May 2017