The use of complex machine learning models in banking holds pitfalls as well as opportunities, says Lael Brainard.

Regulators need to help facilitate a better understanding of the risks involved as the use of artificial intelligence (AI) in financial services increases, according to US Federal Reserve governor Lael Brainard.

A public consultation on the issue by federal agencies is likely, Ms Brainard said. She was speaking at the AI Academic Symposium, an online event held on January 12.

The Fed has been exploring whether additional supervisory clarity is needed to facilitate responsible adoption of AI, Ms Brainard said. The US central bank is seeking input from a wide range of stakeholders, from financial services firms to civil rights groups.

Ms Brainard said that the Fed has been working with the other banking agencies on a possible interagency public consultation on the risk management of AI in financial services.

In terms of the “potential and pitfalls” of using AI in financial services, Ms Brainard cited “the lack of model transparency” as a primary concern.

Complex models

Ms Brainard said that certain machine learning models – such as neural networks – can be so complex they offer little or no insight into how they work.

An explanation of a model that requires a PhD in maths or computer science to understand may be suitable for developers, but it is probably of little use to a compliance officer responsible for overseeing risk management across a wide swath of bank operations, she added.

[A bank’s management] needs to have confidence that a model used for crucial tasks... is robust and will not suddenly become erratic

Lael Brainard, US Fed

Ms Brainard’s observation applies across a wide range of banking operations. For instance, a bank’s management needs to be able to rely on a model’s predictions and risk classifications.

“[A bank’s management] needs to have confidence that a model used for crucial tasks such as anticipating liquidity needs or trading opportunities is robust and will not suddenly become erratic,” Ms Brainard said.

She added that there needs to be certainty that the model will not generate grossly inaccurate predictions when it confronts real-world inputs that differ in some subtle way from the training data, or that are based on a highly complex interaction of the data features.
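To make the concern concrete, consider a minimal sketch in Python (a hypothetical illustration using scikit-learn and synthetic data, not an example drawn from the speech): a model that leans on a proxy feature looks accurate in testing, then degrades when the proxy’s relationship to the true driver shifts subtly in production.

```python
# Hypothetical illustration: a model that relies on a proxy feature performs
# well on training-like data, then degrades when the proxy's relationship to
# the true driver quietly breaks.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Training world: x2 is a near-perfect proxy for the true driver x1.
x1 = rng.normal(size=n)
x2_train = x1 + 0.1 * rng.normal(size=n)
y = (x1 > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([x1, x2_train]), y)
print("in-sample accuracy:", model.score(np.column_stack([x1, x2_train]), y))

# Production world: the proxy relationship shifts subtly; x2 is now just noise,
# and the model's reliance on it silently erodes its accuracy.
x2_shift = rng.normal(size=n)
print("post-shift accuracy:", model.score(np.column_stack([x1, x2_shift]), y))
```

Nothing in the shifted inputs looks obviously wrong to an observer, which is precisely why such failures can go undetected without deliberate monitoring.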

Fraud prevention

On the operational side, banks are increasingly using AI for fraud prevention. According to the Federal Trade Commission, people reported losing more than $1.9bn to fraud in 2019. “AI-based tools may play an important role in monitoring, detecting, and preventing such fraud,” Ms Brainard said.

However, she devoted a substantial proportion of her speech to credit scoring, noting that AI could enhance financial inclusion, but also risks producing scores that are discriminatory.

“If AI models are built on historical data that reflect racial bias or are optimised to replicate past decisions that may reflect bias, the models may amplify rather than ameliorate racial gaps in access to credit,” she warned.

Ms Brainard cited a 2019 study published in Science revealing that an AI risk-prediction model used in the US healthcare system was fraught with racial bias. A key focus for the Biden administration is tackling racial inequality.

“While the black box problem is formidable, it is not, in many cases, insurmountable,” she added.
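One illustration of how practitioners attack the problem (a sketch assuming standard open-source tooling, not a method named in the speech) is model-agnostic permutation importance: shuffle each input feature in turn and measure how much predictive skill the model loses, revealing which features actually drive its decisions.

```python
# Hypothetical sketch: prying open a black-box model with scikit-learn's
# model-agnostic permutation importance on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Features whose shuffling costs the most accuracy matter most to the model.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this do not fully explain a complex model, but they give risk and compliance staff a tractable starting point for scrutiny.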

A version of this article first appeared in The Banker's sister publication Global Risk Regulator.
