
An Interview with Stephanie Kelley

Anti-discrimination Laws, AI, and Gender Bias: A Case Study in Non-mortgage Fintech Lending

How would you describe the overarching concepts in your article, Anti-discrimination Laws, AI, and Gender Bias: A Case Study in Non-mortgage Fintech Lending, in their most simplified way?

The study centers on the use of artificial intelligence (AI) models to make non-mortgage lending decisions – essentially, using AI models to predict whether someone should be given a credit card or an auto loan. Generally, financial technology firms – or “fintechs” – are the ones relying on AI models to make their lending decisions because, unlike a more traditional lender (e.g., a bank), fintechs don’t have access to extensive historical banking information on a given applicant, and instead must rely on alternative forms of data (and the predictions from AI models) to predict default. Notably, many fintechs offer financial services to people who aren’t served by traditional banks (the “unbanked”) – this could be anyone from an entrepreneur with a non-traditional credit history to a new immigrant with no historical financial data in Canada.

Back in 2017 I started studying AI ethics in general, and through this work a number of industry professionals in the financial services industry told me about their concerns with potentially biased or discriminatory lending models. Most interestingly, these professionals stated they were confident they were adhering to the relevant regulations, but were seeing discriminatory outcomes when it came to gender. After some digging, my co-authors and I noticed that many of the relevant anti-discrimination regulations were laws put in place long before AI models were used in financial services. What our colleagues then suggested was that some anti-discrimination laws could paradoxically harm the very people they’re trying to protect when applied to AI models for decision making. Essentially, the same law applied to a human-based decision might have a different impact, or potentially even the exact opposite impact than intended, when applied to an AI-based model.

The study itself involved generating machine learning and traditional statistical lending models to predict applicant defaults. The models were trained using real applicant data from a lender that operates in a country that allows the collection and use of gender in lending models. We then simulated the impact of different anti-discrimination laws and their model and data governance guidance, and found that, indeed, some laws which prohibit the use and collection of gender paradoxically increase discrimination against women when applied to AI-based lending models. Further, we investigated how firms could reduce the discrimination in their model outcomes whilst managing firm profitability.
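The mechanism can be illustrated with a small, purely hypothetical simulation – this is not the study's actual models or data, just a sketch under assumed numbers. We generate synthetic applicants in which women are the minority (30%) but less likely to default, then train two logistic-regression default models: one that may use gender and one that may not, and compare approval rates for women under a fixed risk threshold.

```python
# Hypothetical illustration only (not the paper's models or data):
# compare a default model that can use gender with one that cannot.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Assumed setup: women are the minority but, by construction, more creditworthy.
is_woman = rng.random(n) < 0.30
income = rng.normal(50, 15, n)  # a generic non-gender creditworthiness signal
p_default = np.clip(np.where(is_woman, 0.05, 0.15) + 0.002 * (50 - income), 0, 1)
default = rng.random(n) < p_default

X_full = np.column_stack([income, is_woman])  # gender collection permitted
X_blind = income.reshape(-1, 1)               # gender collection prohibited

approval_rate_women = {}
for name, X in [("with_gender", X_full), ("without_gender", X_blind)]:
    model = LogisticRegression().fit(X, default)
    predicted_risk = model.predict_proba(X)[:, 1]
    approve = predicted_risk < 0.10           # approve low-risk applicants
    approval_rate_women[name] = approve[is_woman].mean()

print(approval_rate_women)
```

In this toy setup the gender-blind model cannot separate low-risk women from higher-risk men with similar incomes, so it assigns women a risk closer to the pooled average and approves fewer of them – the paradox the interview describes, reproduced under assumed parameters.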

 

What was the motivation to speak specifically to this matter?

I was motivated to research fintech discrimination by my interactions with industry, and by the subsequent conversations regarding gender. Once we found the right data via a lender who’d gathered gender-specific information, we could simulate the problem thought to be facing industry, and make recommendations on how to reduce the discrimination whilst allowing firms to manage their profitability – something often overlooked in algorithmic discrimination research from computer science.

Additionally, discrimination in the non-mortgage lending space is a bit of an untapped area, as much of the information needed to do research (e.g., protected attributes like gender or race) cannot be collected due to regulatory prohibitions. In Canada, it’s uncommon to have extensive gender data about lending applicants because it’s strongly protected by privacy regulations. What our research finds, though, is that in this setting the minority population, women, would be better off if firms were able to collect gender and use it in the AI models. This is because in this setting women are more creditworthy but are the minority population, and without access to gender, the AI model isn’t able to correctly differentiate between some men and women.

What we do recommend is that greater responsibility be assigned to firms collecting this type of sensitive data, in order to maintain accountability for reducing discrimination across fintech lending.

 

Who are your collaborators and where are they located?

I worked with three collaborators on this study:

Anton Ovchinnikov from the Smith School of Business, Queen’s University (he’s also affiliated with INSEAD), and two individuals at our research partner, the Union Bank of the Philippines – David R. Hardoon, and Adrienne Heinrich. David and Adrienne are both key members at the AI Center of Excellence at Aboitiz Data Innovation.

 

Is this part of an ongoing collaboration?

Yes! We’re collaborating with the Union Bank of the Philippines and Aboitiz Data Innovation to investigate several AI ethics risks across the bank. Currently we’re investigating model explainability – how humans can understand the decisions made by AI-based models (“looking under the hood” of the model, so to speak). AI algorithms are developed by data scientists, and we’re looking at how the actual users of model explanations (in this case, lending officers) react to different forms of explanation. What we know is that most of these explanation methods have been conceived in a data science vacuum, and assumed to be understood and usable by non-data scientists. What we’re looking at is whether that assumption is correct, by studying the preferences of lending officers. We’ll then evaluate the operational feasibility of implementing the explanation visualizations the lending officers prefer.

 

Do you have future plans for this research? If so, what are they?

Yes, the overarching plan for my work on AI ethics in the financial services space is to inform organizations on methods to reduce ethics risks, and to work with regulators who are developing new laws in this space. Currently I'm working with the Office of the Superintendent of Financial Institutions on the development of regulation in this space as it pertains to Canada, and the findings from this body of work have already been incorporated into the AI ethics guidelines in Singapore by the Monetary Authority of Singapore.

 

