Longterm Wiki

Research from Lehigh University

web

Relevant to AI safety discussions around algorithmic fairness, deployment risks in high-stakes domains, and the governance challenges of holding automated decision systems accountable for discriminatory outcomes.

Metadata

Importance: 52/100 | news article | news

Summary

Lehigh University research investigates racial bias in AI-driven mortgage underwriting systems, finding that algorithmic decision-making in lending perpetuates or amplifies discriminatory outcomes against minority applicants. The study highlights how automated systems can encode and reproduce historical patterns of racial discrimination in financial services.

Key Points

  • AI mortgage underwriting systems demonstrate measurable racial bias, disadvantaging minority applicants in loan approval decisions.
  • Algorithmic automation does not eliminate human bias but may instead institutionalize and scale discriminatory patterns.
  • The research raises concerns about accountability gaps when consequential financial decisions are delegated to opaque AI systems.
  • Findings have implications for fair lending regulations and the need for bias auditing of AI systems in high-stakes domains.
  • Illustrates the broader challenge of deploying AI in domains with historically discriminatory practices without adequate bias mitigation.

Cited by 1 page

Cached Content Preview

HTTP 200 | Fetched Apr 9, 2026 | 7 KB
AI Exhibits Racial Bias in Mortgage Underwriting Decisions | Lehigh University News
Credit: Moor Studio / iStock
LLM training data likely reflects persistent societal biases, but simple fixes can help, according to findings from Donald Bowen III, McKay Price and Ke Yang.

Story by University Communications
Posted on August 20, 2024
Tags: Academics, College of Business, Faculty, Research, Artificial Intelligence
 Putting AI to use in mortgage lending decisions could lead to discrimination against Black applicants, according to new research. But researchers say there may be a surprisingly simple solution to mitigate this potential bias. 

 In an experiment using leading commercial large language models (LLMs) to evaluate loan application data, Lehigh researchers found that LLMs consistently recommended denying more loans and charging higher interest rates to Black applicants compared to otherwise identical white applicants. 
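The article does not reproduce the researchers' prompts, but the audit design it describes, asking an LLM to act as an underwriter on an application record, can be sketched. Everything below is hypothetical: the wording, field names, and the `build_underwriting_prompt` helper are illustrative assumptions, not the study's actual materials.

```python
# Hypothetical sketch of prompting an LLM to act as a mortgage underwriter.
# The prompt wording and application fields are illustrative assumptions,
# not the study's actual protocol.
def build_underwriting_prompt(app: dict) -> str:
    return (
        "You are a mortgage underwriter. Given the application below, "
        "answer with APPROVE or DENY and a recommended interest rate.\n"
        f"Race: {app['race']}\n"
        f"Credit score: {app['credit_score']}\n"
        f"Income: ${app['income']:,}\n"
        f"Loan amount: ${app['loan_amount']:,}\n"
    )

prompt = build_underwriting_prompt(
    {"race": "Black", "credit_score": 700, "income": 85_000, "loan_amount": 320_000}
)
print("Credit score: 700" in prompt)  # True
```

In an audit of this kind, the same prompt template would be sent to the model many times with only the race field varied, so any difference in recommendations is attributable to that field.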

 This discovery is particularly alarming given the historical and ongoing racial disparities in homeownership. 

 “This finding suggests that LLMs are learning from the data they are trained on, which includes a history of racial disparities in mortgage lending, and potentially incorporating triggers for racial bias from other contexts,” said Donald Bowen III, assistant professor of finance in the College of Business and one of the authors of the study.

 The study used real mortgage application data, drawn from a sample of 1,000 loan applications included in the 2022 Home Mortgage Disclosure Act (HMDA) dataset, to create 6,000 experimental loan applications. In the experiment, researchers manipulated race and credit score variables to determine their effects. 
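The construction described above, expanding real applications into experimental variants by manipulating only race and credit score, can be sketched as a simple cross-product. This is a minimal illustration under assumed field names and manipulation levels, not the study's actual schema or code.

```python
from itertools import product

# Hypothetical sketch of the audit design: take real application profiles
# and vary only the race and credit-score fields, holding everything else
# fixed. Field names and values are illustrative, not the study's schema.
base_applications = [
    {"income": 85_000, "loan_amount": 320_000, "dti": 0.36},
    {"income": 62_000, "loan_amount": 210_000, "dti": 0.41},
    # ... ~1,000 real HMDA profiles in the actual study
]

races = ["white", "black", "hispanic"]
credit_scores = [620, 700]  # illustrative manipulation levels

experimental = [
    {**app, "race": race, "credit_score": score}
    for app, (race, score) in product(base_applications, product(races, credit_scores))
]

# Each base profile yields len(races) * len(credit_scores) variants, so
# 1,000 profiles x 6 combinations would give the 6,000 applications cited.
print(len(experimental))  # 12 here (2 base profiles x 6 combinations)
```

Because every variant of a given profile is identical except for the manipulated fields, any systematic difference in the model's decisions across variants isolates the effect of those fields.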

 The results were stark: Black applicants consistently faced higher barriers to homeownership, even when their financial profiles were identical to white applicants. 

 Based on the experimental results using OpenAI’s GPT-4 Turbo LLM, Black applicants would, on average, need credit scores approximately 120 points higher than white applicants to receive the same approval rate, and about 30 points higher to receive the same interest rate. 
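One way a "points gap" like the 120-point figure can be read off (not necessarily the paper's method) is to model approval probability as a function of credit score within each group and find the horizontal shift that equalizes the curves. The parameters below are made up purely to illustrate the arithmetic.

```python
# Illustrative only: parameters are invented, not the study's estimates.
# Model approval probability as linear in credit score within each group:
#   p_group(score) = intercept + slope * score
white = {"intercept": -1.30, "slope": 0.0030}
black = {"intercept": -1.66, "slope": 0.0030}  # lower intercept: lower approval odds

# With equal slopes, the horizontal shift that equalizes the two lines,
# i.e. the extra credit-score points needed for the same approval rate,
# is the intercept gap divided by the slope.
gap = (white["intercept"] - black["intercept"]) / white["slope"]
print(round(gap))  # 120
```

The same calculation applied to an interest-rate model rather than an approval model would yield the smaller (~30-point) gap the article reports for pricing.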

 The models also exhibited bias against Hispanic applicants, though generally to a lesser extent than against Black applicants.


... (truncated, 7 KB total)
Resource ID: edca1d403eb2dff5 | Stable ID: sid_srOIeCEe7i