Consumer Financial Protection Bureau

The CFPB Has an Opportunity to Greatly Advance the Ethical and Non-Discriminatory Use of AI in Financial Services and Should Take It

By Brad Blower and Adam Rust

On October 30, 2023, the White House issued Executive Order 14110, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The Executive Order (EO) is sweeping in its call for executive branch and independent federal agencies to foster the ethical use of artificial intelligence (AI). One of the purposes of the EO is to ensure that the use of AI is consistent with the “administration’s dedication to advancing equity and civil rights” (Section 2(d)).

Even though it is hard to find solutions in an area changing so rapidly, some fixes are evident. The CFPB can significantly advance the purposes of the EO by providing guidance on the non-discriminatory use of AI.

Given how rapidly AI is advancing, the CFPB should act expeditiously. The CFPB has demonstrated its commitment to proceeding at a deliberate pace, but the market is impatient. Financial institutions are already deploying generative AI across their operations, yet uncertainty remains about how to address longstanding concerns over digital redlining and black-box models. In the three years since the CFPB sought comments on the ethical use of AI to combat discrimination, the very limited guidance it has issued has neither materially advanced equity nor clarified, for consumers or the financial services sector, what constitutes effective disparate impact monitoring. If that guidance has had any impact, the CFPB has not commented publicly. The agency should promptly correct this oversight by issuing written guidance on technologies and standards for fairness oversight or, at a minimum, by providing examples in its supervisory highlights and fair lending reports of compliant standards and oversight techniques it has observed in the marketplace.

Although the EO sets out mandatory timetables for executive branch agencies to issue reports and guidance, the language covering independent federal agencies, such as the CFPB, is far less specific and merely discretionary. Only two sections of the EO cover financial services. Nonetheless, both are very specific in their ask of the CFPB and other independent agencies. In Section 7.3(b), the Federal Housing Finance Agency (FHFA) and the CFPB are “encouraged to consider using their authorities to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law.” And Section 8 calls on independent regulatory agencies to consider using their “full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.”

We believe there is a significant need for the CFPB to exercise its authority to provide more specific guidance, particularly on what constitutes adequate technologies and standards for monitoring for disparate impact under the Equal Credit Opportunity Act (ECOA), the primary fair lending law it enforces. To date, the CFPB has been reluctant to provide any written guidance on compliant methodologies and standards. By providing more specific guidance rather than leaving lenders to determine on their own what is compliant, the CFPB would not only further the purposes of the EO but also limit the possibility of a race to the bottom by the portion of the financial services industry that preys on low- to moderate-income and BIPOC consumers. Because these populations already suffer the effects of discrimination, acting now could prevent algorithmically driven lending from repeating the longstanding discriminatory patterns of traditional lending.

Guidance could transform ambiguity into clarity. A recent report by FinRegLab, a non-profit innovation center that tests new data and techniques to inform public policy debates on the ethical development of financial services, found several areas where additional guidance from regulators could be helpful. The report noted (p. 41) that the CFPB had issued circulars in 2022 and 2023 stressing the importance of comparing outcomes between protected classes and white applicants and addressing the use of “post hoc” tools to explain a model’s decision. Nonetheless, the report also found that financial services companies were “using their best judgment in implementing methodologies for testing because there was no federal guidance.” FinRegLab also noted that there was no written standard for when an institution should search for a less discriminatory alternative (LDA) or for when certain technologies, such as “debiasing,” could be used to improve the fairness of a model (pp. 62-63). Through peer-reviewed research, computer scientists have developed the means to test model fairness and explainability. The needed step is for the CFPB to provide clear guidance on how financial institutions should use these methods.
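
To make the testing gap concrete, the sketch below shows, in Python, the kind of outcome comparison the circulars describe: an adverse impact ratio (AIR) comparing approval rates for a protected class and a control group. Everything in it is illustrative; the applicant counts are hypothetical, and the 0.80 trigger echoes the four-fifths rule from employment law, a threshold the CFPB has never endorsed for credit.

# A minimal, hypothetical sketch of fair lending outcome testing.
# The counts and the 0.80 trigger are illustrative assumptions,
# not CFPB standards.

def adverse_impact_ratio(approved_protected, applied_protected,
                         approved_control, applied_control):
    """Ratio of the protected class's approval rate to the control group's."""
    rate_protected = approved_protected / applied_protected
    rate_control = approved_control / applied_control
    return rate_protected / rate_control

air = adverse_impact_ratio(410, 1000, 560, 1000)  # hypothetical counts
if air < 0.80:  # echoes the EEOC four-fifths rule; not a lending standard
    print(f"AIR = {air:.2f}; disparity may warrant further review")

Even this trivial computation raises the very questions lenders say are unanswered: which groups to compare, over what population, and at what level a disparity should trigger further review or an LDA search.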

Again, the clock is ticking. The CFPB has not issued any substantive guidance on what methodologies and standards further compliance with ECOA, even though more than three years have passed since it sought comments on how it could more effectively promote and oversee compliance. Both advocacy groups and financial services companies submitted comments, in response to the CFPB’s request and afterward, asking the CFPB to provide guidance on methodologies and techniques for developing and monitoring compliant models, to indicate when lenders should search for an LDA, and to offer quantitative thresholds for what constitutes practically significant disparate impact warranting further review by the lender. In the three years since stakeholders responded to the request for information, the CFPB has issued no written guidance on any of these issues apart from the two circulars discussed above, which stressed, very generally, the importance of validating tools and of explaining decisions that affect consumers.

We must acknowledge one helpful piece of verbal guidance: a senior CFPB leader noted at several conferences that “rigorous searches” for LDAs are an important component of fair lending compliance. We must emphasize, however, that the CFPB has issued no written guidance on LDAs.
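
To illustrate what a search might involve, the sketch below shows one pattern practitioners describe: refit the model over alternative specifications (here, feature subsets) and keep any candidate that is nearly as accurate as the baseline but measurably less disparate. The fit_and_score helper is hypothetical, standing in for retraining the model and evaluating its accuracy (AUC) and adverse impact ratio on holdout data, and the 0.01 AUC tolerance is an assumption, not a standard.

# A minimal sketch of one LDA search pattern. fit_and_score is a
# hypothetical helper that retrains the model on a feature subset and
# returns (AUC, adverse impact ratio) on holdout data; the tolerance
# is an illustrative assumption, not a regulatory standard.

from itertools import combinations

def search_for_ldas(features, fit_and_score, baseline_auc, baseline_air,
                    auc_tolerance=0.01):
    candidates = []
    for k in range(len(features) - 1, 0, -1):  # proper subsets only
        for subset in combinations(features, k):
            auc, air = fit_and_score(subset)  # retrain and evaluate
            if auc >= baseline_auc - auc_tolerance and air > baseline_air:
                candidates.append((subset, auc, air))
    # Rank candidates by fairness improvement, best first.
    return sorted(candidates, key=lambda c: c[2], reverse=True)

Whether a lender must run such a search at all, how exhaustive it must be, and when a candidate is less discriminatory enough to adopt are precisely the questions on which written guidance is missing.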

The public interest is compromised when well-intentioned actors wait for clarity while others move ahead, break things, and ask questions later.

These are the costs of ambiguity. Mid-size and smaller financial institutions will not invest resources, hire staff, and move forward without more clarity. The result will be a market dominated by large banks and disrupters. Currently, a handful of financial institutions build their models in-house. Another group, motivated by the opportunity to tap these technologies but lacking the resources to do so internally, hires third-party vendors. Many financial institutions, however, remain on the sidelines. Practically speaking, this divides the market into haves and have-nots, with smaller banks making up a higher share of the second group. And because AI and big data could potentially better serve credit invisibles, the lack of guidance leads many in the market to inaction and thereby blocks progress toward greater financial inclusion.

The CFPB’s reluctance to provide disparate impact guidance appears to rest on three reasons. The primary reason is a practical one: the CFPB may fear that conservative trade associations will sue if it issues such guidance. Trade associations have already sued the CFPB in efforts to obstruct several of its rules, including its small business data collection rule and its payday lending rule.

The other two reasons are far more speculative. The CFPB may believe that its role is not to hold the hand of the financial services industry by providing thresholds below which agency action would be unlikely. We have also heard a concern, from some at the CFPB and in the advocacy community, that unscrupulous financial services companies could “game” the system by engineering their use of AI, including machine learning models, to keep measured outcomes below CFPB-recommended thresholds even where their practices had a significant discriminatory impact. That concern could explain the agency’s reluctance to provide more specific guidance on thresholds.

We believe the CFPB can address all three of these concerns by fulfilling a core function of its mission: publishing summaries of findings from its supervisory work in its Supervisory Highlights or fair lending reports. It should provide examples of fair-lending-compliant methodologies and AI oversight. While the market will certainly draw insight from enforcement actions, those should not be the only data points. Illustrative case studies, published in a form that cloaks the lender’s identity, could appear in supervisory highlights and annual fair lending reports to identify where practices are working and, perhaps even more importantly, where they are not. The independent monitorship of one lender, for example, recently brought to public attention the complexities of using education data. At a high level, the CFPB should discuss how lenders are balancing the goals of financial inclusion and accuracy. The market would derive value from understanding the CFPB’s views on the complexities of using AI in compliance with ECOA and its implementing Regulation B.

Taking these steps to address ambiguity is a reasonable approach to supervising this market. It would not cross the threshold of instructing lenders on which specific algorithms or metrics to use, nor would it amount to “picking winners and losers” among third-party vendors. In fact, understanding the CFPB’s views could free lenders to innovate with confidence. In practice, this could lead to pro-consumer outcomes and advance the ethical and non-discriminatory use of AI. For example, mid-size and smaller financial institutions that had felt uneasy devoting resources to more complicated but more inclusive models would re-evaluate the risk-reward calculus. That could help level the playing field, bringing immediate benefits to community banks and possibly enhancing financial inclusion in the communities they serve.

The CFPB can also look to the Federal Trade Commission’s (FTC) December 19, 2023 settlement with Rite Aid as an example of using agency authority to limit the discriminatory use of AI. The FTC found that Rite Aid had inadequate safeguards on its use of AI-based facial recognition to spot shoplifters, which led to false identifications of women and people of color. Just as that enforcement action can help deter similar discriminatory conduct by others, CFPB examples of positive and problematic uses of AI monitoring could encourage others in the market to take note.

Although providing such information will not answer the longstanding calls for specific guidance from the CFPB, it will highlight what the agency has observed in the marketplace, which would be very helpful in the interim. It seems unlikely that a conservative group could successfully challenge the CFPB’s marketplace observations in court. Similarly, we doubt a financial services company could somehow “game” such observations or use them in a way inconsistent with fair lending principles. Nor would such observations inappropriately “hold the hand” of financial sector companies.

These examples could include descriptions of techniques lenders have used to achieve fairness goals without significantly compromising the accuracy of their underwriting models. In the wake of Dodd-Frank, the sustainability of a loan, including an accurate assessment of the borrower’s ability to repay, is a core contributing factor to fairness. The CFPB should also share examples of LDA searches that have been employed constructively in the industry. In sum, by providing a bit more transparency, the CFPB would relay helpful market information, promote the purpose of the EO, and serve consumers. Despite our divisive and litigious culture, such a move would be relatively non-controversial and would provide a clearer path for the ethical use of AI to promote financial inclusion.


Brad Blower is the Founder of Inclusive-Partners LLC, which advises non-profits and for-profits on financial inclusion and the ethical use of AI. Adam Rust is the Director of Financial Services at the Consumer Federation of America.