Manipulation-Proof Credit Scoring Algorithms in Kenya
Financial Inclusion · Kenya
Context
An increasing number of decisions are guided by machine learning algorithms, and in digital credit, such algorithms are frequently used to determine loan eligibility. In most cases these are “black box” algorithms that are inscrutable to borrowers and regulators, creating scope for bias and abuse. Lenders argue that the algorithms cannot be made transparent, because doing so would invite widespread gaming, which could undermine the viability of lending.
Study Design
This project developed new approaches to credit scoring (and other algorithmic decisions) that are not susceptible to gaming, even when the algorithm is fully transparent. The researchers tested the approach in Kenya, in partnership with the Busara Center for Behavioral Economics (Busara). They built a new smartphone app that, with users’ consent, passively collected data on how people used their phones, mimicking the way digital credit products collect user data and convert it into a credit score using machine learning (a simplified sketch of this kind of scoring pipeline appears below). Busara then recruited over 1,500 people from the Nairobi area and offered them financial rewards for completing challenges on the app that mirrored real-world incentives to strategically alter behaviors, such as sending texts or receiving calls.
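To make the pipeline concrete, the following is a minimal, hypothetical sketch of how a digital credit product might turn passively logged phone-usage counts into a credit score with a standard classifier. The feature names, synthetic data, and model choice are illustrative assumptions, not the researchers’ actual app or algorithm.

```python
# Hypothetical sketch: phone-usage counts -> features -> repayment classifier.
# Feature names and data are invented; this is not the study's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1500  # roughly the size of the Nairobi sample described above

# Invented behavioral features of the kind a digital-credit app might log.
X = np.column_stack([
    rng.poisson(30, n),   # texts sent per week
    rng.poisson(20, n),   # calls received per week
    rng.gamma(2, 50, n),  # airtime top-up amounts
])

# Synthetic repayment labels loosely tied to the features.
logits = 0.02 * X[:, 0] + 0.03 * X[:, 1] + 0.005 * X[:, 2] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
score = model.predict_proba(X_te)[:, 1]  # a "credit score" in [0, 1]
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

The weakness of a pipeline like this is exactly what the study probes: once the decision rule is public, borrowers can cheaply inflate behaviors such as texting to raise their score.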
Results and Policy Lessons
The researchers found that people do manipulate their behavior when incentivized to do so, and that some behaviors are more difficult or costly to manipulate than others. They also found that their algorithm, which accounts for individuals’ propensity to manipulate particular behaviors, outperformed standard algorithms by 13 percent on average when the decision rules were made more transparent (a simple sketch of the underlying idea follows). The research shows how “manipulation-proof” machine learning algorithms can allow for more transparent decision rules and give regulators ways to measure the cost of algorithmic transparency in real-world environments.
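The summary does not specify the estimator, so the following is only an illustrative sketch of the core intuition: penalize each feature in inverse proportion to how costly it is to manipulate, so a fully transparent decision rule leans on behaviors that are expensive to fake. The manipulation costs, feature names, and ridge-style penalty here are assumptions, not the paper’s actual method, which models borrowers’ strategic responses directly.

```python
# Illustrative sketch (not the paper's estimator): a ridge-style scorer whose
# per-feature penalty grows as that behavior gets cheaper to manipulate.
# All feature names and cost values below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: rows are borrowers, columns are phone-usage features.
features = ["texts_sent", "calls_received", "topup_amount", "contacts_count"]
X = rng.normal(size=(500, len(features)))
true_w = np.array([0.5, 0.8, 1.2, 0.3])
y = X @ true_w + rng.normal(scale=0.5, size=500)  # proxy for repayment propensity

# Hypothetical manipulation costs: a low cost means the behavior is easy
# to game once the decision rule is published.
manipulation_cost = np.array([0.2, 0.5, 2.0, 1.5])

# Per-feature penalty = 1 / cost, so cheaply manipulable behaviors get
# little weight in the transparent decision rule.
lam = np.diag(1.0 / manipulation_cost)
beta = np.linalg.solve(X.T @ X + len(y) * lam, X.T @ y)

for name, b in zip(features, beta):
    print(f"{name:16s} weight = {b:+.3f}")
```

Under this kind of scheme, publishing the weights gives borrowers little to exploit: the features they could cheaply alter carry small weights, which is one way to interpret the trade-off between transparency and predictive performance that the study quantifies.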