
Anti-discrimination Laws, AI and Gender Bias in Fintech Lending

Published: 2022

Anton Ovchinnikov
Professor & Distinguished Professor of Management Analytics

Key Takeaways

  • To reduce discrimination in machine learning models, algorithmic decision making should incorporate responsibly collected data on protected attributes
  • Antidiscrimination laws require updating to better reflect the algorithmic decision-making models that now determine lending outcomes
  • Collecting protected information may, in fact, reduce discrimination and increase profitability, according to research in the nonmortgage fintech lending setting

Large-scale organizational decision-making is increasingly conducted by algorithms and artificial intelligence (AI), a trend accompanied by accusations of discrimination from consumers and the media. Antidiscrimination laws exist to protect vulnerable groups, yet these laws have not been reviewed in the context of machine learning (ML) and AI. The researchers' findings reveal addressable gaps between what current antidiscrimination regimes intend and what they achieve once decision-making is delegated to algorithms. Anton Ovchinnikov and his co-authors assessed the impact of these outdated laws in the nonmortgage fintech sector, focusing on three regimes meant to prevent gender-based lending bias.

The three regimes represented global approaches to collecting gender-related data in the credit application context. Regime 1 allowed for gender data collection and use in AI models; Regime 2 permitted gender data collection but prohibited its use as a feature in training and screening models employed for individual lending decisions; and Regime 3 prohibited the collection and use of gender data. Using ML models trained on a rich, publicly accessible data set, the researchers simulated these three antidiscrimination regimes, measuring their effect on model quality and firm profitability.
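The practical difference between the regimes comes down to which columns a lender may collect and which it may feed into a model. The following is a minimal sketch of that distinction using synthetic data and a scikit-learn logistic regression; the variable names, data distributions, and model choice are illustrative assumptions, not the authors' actual data set or pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit data set (assumed, for illustration only).
rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                 # protected attribute
income = rng.normal(50 + 5 * gender, 10, n)    # correlated with gender
debt = rng.normal(20, 5, n)

# Default probability depends on income and debt via a logistic link.
p_default = 1 / (1 + np.exp(0.1 * (income - debt) - 2))
default = rng.random(n) < p_default

# Regime 1: gender may be collected AND used as a model feature.
X_regime1 = np.column_stack([income, debt, gender])
# Regime 2: gender is collected (so discrimination can still be audited)
# but must be excluded from the features used for lending decisions.
# Regime 3: gender is never collected; the feature matrix looks the same
# as Regime 2, but no per-gender audit of outcomes is possible.
X_regime23 = np.column_stack([income, debt])

def fit_and_score(X, y):
    """Train a screening model and return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model.score(X_te, y_te)

acc_regime1 = fit_and_score(X_regime1, default)
acc_regime23 = fit_and_score(X_regime23, default)
```

The key structural point the sketch makes is that Regimes 2 and 3 train on identical features; they differ only in whether gender is available off to the side, which determines whether discrimination can even be measured.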

Ovchinnikov and his co-authors found that regimes banning the use of gender data (like those in the United States) produced significantly more discrimination and slightly lower profitability. Under all three regimes, ML models were less discriminatory, achieved better predictive quality, and were more profitable than traditional statistical methods. This advantage stems from techniques applied during model training: hyperparameter tuning, feature engineering, and feature selection.
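Comparing discrimination across regimes requires a concrete fairness measure. One common choice, shown below as an illustrative assumption (the study may use different metrics), is the demographic parity difference: the gap in approval rates between gender groups. Note that computing it requires the gender column, which is exactly what Regime 3 prohibits collecting.

```python
import numpy as np

def parity_difference(approved, gender):
    """Absolute gap in approval rates between the two gender groups."""
    approved = np.asarray(approved, dtype=bool)
    gender = np.asarray(gender)
    rate_0 = approved[gender == 0].mean()
    rate_1 = approved[gender == 1].mean()
    return abs(rate_0 - rate_1)

# Example: six applicants, their approval decisions, and recorded gender.
# Group 0 is approved at 2/3, group 1 at 1/3, so the gap is 1/3.
gap = parity_difference([1, 1, 0, 1, 0, 0], [0, 0, 0, 1, 1, 1])
```

A lender under Regime 2 can run this audit on its own decisions even though gender never enters the model; under Regime 3 the metric is uncomputable, which is one way a collection ban can mask, rather than prevent, discrimination.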

Significantly, the research underscores the pressing need to revisit antidiscrimination laws in light of the increasing deployment of algorithmic decision-making in nonmortgage consumer lending. In terms of gender, responsible data collection and use should be allowed in these contexts, as in Regime 1. For lenders, the project provides guidance on reviewing existing algorithm design and data usage.