Consumer Financial Protection Bureau

Consumer Groups Call on CFPB to Protect Consumers from Discriminatory Algorithms Used by Banks and Other Financial Institutions to Make Credit Decisions

Biased Algorithms Can Make Credit Unaffordable for Underserved Communities

WASHINGTON, D.C. – In a letter sent to the Consumer Financial Protection Bureau today, Consumer Reports and the Consumer Federation of America urged the Bureau to issue guidance making it clear that banks and other financial firms have an obligation to mitigate the discriminatory impact of algorithms they use to underwrite and price credit for consumers.  

“Financial institutions are increasingly using AI and machine learning (ML) models to make decisions about consumers, including whether they qualify for a loan and how much interest they’ll be charged to borrow money,” said Jennifer Chien, senior policy counsel at Consumer Reports. “While these models offer important potential benefits, they can reinforce and worsen existing and historical biases that prevent communities of color and low-income consumers from accessing affordable credit. As AI decision-making advances rapidly, the CFPB should provide clear guidance to ensure lenders treat consumers fairly and protect them from algorithmic discrimination.”

“We agree with the CFPB that fair lending rules should apply to algorithmic lending,” said Adam Rust, Director of Financial Services at the Consumer Federation of America, “but too many credit applicants will be hurt if the approach is to wait to see what trouble emerges from black boxes, compared to one where clear expectations are established for how and when to search for fairer models. For traditional lending, the rules of the road are set. But unless the CFPB establishes clear guidance for testing and correcting algorithmic discrimination, it invites loopholes and evasions by lenders attempting to break things first and ask questions later.”

Requiring companies using algorithmic decision-making tools to take proactive steps to mitigate disparate impact is in keeping with established anti-discrimination laws, including the Equal Credit Opportunity Act (ECOA), which the CFPB implements. CFPB staff have noted on several occasions that companies using AI tools for underwriting should conduct a robust search for less discriminatory alternatives (LDAs) as part of their fair lending compliance. However, the CFPB has not spelled out that regulatory expectation in writing, and industry compliance remains inconsistent at best.

Under existing disparate impact doctrine, companies can face liability if an alternative approach serves the same business need but with less disparate impact. With AI/ML, companies can now iterate much more quickly and effectively to find comparable alternative models that are fairer. 
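
To make this concrete, the following minimal Python sketch compares a hypothetical credit model that uses a proxy feature against a candidate alternative that drops it, scoring both on predictive performance and on a disparate impact measure. The synthetic data, features, approval cutoff, and choice of logistic regression are all illustrative assumptions, not a description of any lender’s actual practice or a CFPB-endorsed methodology.

```python
# Minimal, hypothetical sketch of a less-discriminatory-alternative (LDA)
# search: compare a model that uses a proxy feature against a candidate
# that drops it, on both predictive performance and disparate impact.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants; group 1 stands in for a protected class.
group = rng.integers(0, 2, n)
income = rng.normal(60 - 10 * group, 15, n)           # in $ thousands
debt_ratio = rng.normal(0.35 + 0.05 * group, 0.10, n)
zip_feature = group + rng.normal(0, 0.5, n)           # a ZIP-style proxy
p_default = 1 / (1 + np.exp(0.1 * income - 8 * debt_ratio))
default = (rng.random(n) < p_default).astype(int)

X_full = np.column_stack([income, debt_ratio, zip_feature])
X_candidate = np.column_stack([income, debt_ratio])   # LDA candidate: drop proxy

def evaluate(X):
    """Return (AUC, adverse impact ratio) for a model trained on X."""
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, default, group, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    risk = model.predict_proba(X_te)[:, 1]            # predicted default risk
    approved = risk < np.quantile(risk, 0.7)          # approve lowest-risk 70%
    air = approved[g_te == 1].mean() / approved[g_te == 0].mean()
    return roc_auc_score(y_te, risk), air

for name, X in [("model with proxy", X_full), ("LDA candidate", X_candidate)]:
    auc, air = evaluate(X)
    print(f"{name}: AUC = {auc:.3f}, adverse impact ratio = {air:.2f}")
```

In practice, an LDA search would iterate over many more candidates (alternative feature sets, hyperparameters, and debiasing techniques) rather than a single comparison; the point of the sketch is that such comparisons are now cheap to run.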

The groups’ letter urges the CFPB to clarify that lenders have an obligation to search for and implement less discriminatory alternatives when algorithmic tools produce biased outcomes, and to provide ongoing guidance so companies follow best practices. Clear guidance will ensure lenders conduct robust searches for less discriminatory alternatives while also supporting enforcement efforts. The letter details a number of steps the CFPB should take to clarify supervisory expectations regarding the search for LDAs:

  • The CFPB should make clear that for lenders to fulfill the obligation to search for LDAs, they should take proactive steps during each stage of the model development pipeline to mitigate disparate impact.
  • The CFPB should provide ongoing guidance to shape expectations on how to properly search for LDAs, including regarding appropriate techniques to mitigate disparate impact and the frequency and intensity of LDA searches.
  • To address market uncertainty, the CFPB should provide guidance and examples on appropriate metrics and methodologies for measuring fairness (a sketch of two example metrics appears after this list).
  • The CFPB should provide guidance on how to determine what is a viable LDA.
  • A range of vehicles, including supervisory highlights, should be used to share emerging best practices with the industry.  
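
As one illustration of the metrics such guidance might address, the short sketch below computes two commonly used disparate impact measures from approval decisions and group labels: the adverse impact ratio, often checked against the "four-fifths rule" heuristic, and the demographic parity difference. The data here is hypothetical, and which metrics are actually appropriate is precisely what the letter asks the CFPB to specify.

```python
import numpy as np

def adverse_impact_ratio(approved, group):
    """Selection-rate ratio: protected-group approval rate divided by
    reference-group approval rate. Values below ~0.8 are often flagged
    under the 'four-fifths rule' heuristic."""
    approved = np.asarray(approved, dtype=bool)
    group = np.asarray(group)
    rate_protected = approved[group == 1].mean()
    rate_reference = approved[group == 0].mean()
    return rate_protected / rate_reference

def demographic_parity_difference(approved, group):
    """Absolute gap in approval rates between the two groups."""
    approved = np.asarray(approved, dtype=bool)
    group = np.asarray(group)
    return abs(approved[group == 1].mean() - approved[group == 0].mean())

# Hypothetical decisions: 1 = approved; group 1 = protected class.
approved = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
group    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(f"adverse impact ratio: {adverse_impact_ratio(approved, group):.2f}")
print(f"parity difference:    {demographic_parity_difference(approved, group):.2f}")
```

The four-fifths rule is a rule of thumb borrowed from employment law, not a bright-line legal threshold for credit; that ambiguity is part of the market uncertainty the letter describes.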

Algorithmic discrimination can arise from many sources. Unrepresentative, incorrect, or incomplete training data, as well as data that reflects historical biases, can lead to discriminatory outcomes. Biases can also be embedded into models through the design process, such as through improper use of protected characteristics, whether directly or via proxies. Choices made during model development can also affect a model’s predictiveness for particular populations. The potential for discrimination is further compounded by the lack of transparency of complex ML models.
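
One deliberately simplified way to surface such proxies during model review is to check how strongly each candidate feature correlates with a protected characteristic, as in the hypothetical sketch below. Real proxy analysis is more involved (proxies can be nonlinear or emerge from combinations of features), and the feature names and threshold here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, n)  # hypothetical protected-class label

# Hypothetical features: one benign, one a strong ZIP-derived proxy.
features = {
    "debt_to_income": rng.normal(0.35, 0.1, n),
    "zip_median_income": rng.normal(50 - 15 * protected, 5, n),
}

# Flag features whose correlation with the protected class is high;
# the 0.3 cutoff is an arbitrary illustrative threshold.
for name, values in features.items():
    r = np.corrcoef(values, protected)[0, 1]
    flag = "POTENTIAL PROXY" if abs(r) > 0.3 else "ok"
    print(f"{name}: corr = {r:+.2f} ({flag})")
```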

The CFPB has begun issuing important and concrete guidance on a range of AI-related topics, including the technology’s use in marketing and the requirement that consumers receive clear explanations for adverse credit decisions. The Bureau has also provided insights on cutting-edge issues, such as a spotlight on risks to consumers from the use of AI chatbots. However, the CFPB has been noticeably less clear on important questions related to testing AI/ML lending models to mitigate discrimination, including disparate impact.

###

Founded in 1936, Consumer Reports (CR) is an independent, nonprofit and nonpartisan organization that works with consumers to create a fair and just marketplace. Known for its rigorous testing and ratings of products, CR advocates for laws and company practices that put consumers first. CR is dedicated to amplifying the voices of consumers to promote safety, digital rights, financial fairness, and sustainability. The organization surveys millions of Americans every year, reports extensively on the challenges and opportunities for today’s consumers, and provides ad-free content and tools to 6 million members across the U.S.

The Consumer Federation of America (CFA) is an association of non-profit consumer organizations established in 1968 to advance the consumer interest through research, advocacy, and education. Today, more than 250 of these groups participate in the federation and govern it through their representatives on the organization’s Board of Directors. As an advocacy organization, CFA seeks pro-consumer policies on various issues before Congress, the White House, federal and state regulatory agencies, state legislatures, and the courts. We communicate and work with public officials to promote beneficial policies, oppose harmful ones, and ensure a balanced debate on issues important to consumers.