AI risks

Federal regulators are seeking details about how the technology is employed in fraud prevention and credit underwriting.

Five US federal regulatory agencies are asking financial institutions how they use artificial intelligence (AI), seeking more detail on how the technology is employed in areas such as fraud prevention, personalised customer services and credit underwriting.

The agencies are seeking feedback from a wide spectrum of stakeholders, including consumer groups and trade associations. Specifically, they are asking about particular areas of AI use, such as machine learning; appropriate governance, risk management and controls over AI; challenges in developing and managing AI; and whether clarification of its use would be helpful.

In their request for information (RFI), the agencies said they support responsible innovation by financial institutions, provided it includes the identification and management of associated risks.

They noted that financial institutions are using a range of AI technologies for enhanced decision making. These include flagging unusual transactions, making credit decisions, risk management, textual analysis (analysing unstructured data) and cyber security.

Enhancements and concerns

The agencies recognise that AI can improve efficiency, enhance the performance of financial institutions and bring benefits to customers. The technology can reveal relationships among variables that are not otherwise intuitive and help process large data sets.

However, the agencies are clearly concerned about AI-related risks. In the RFI document they note that some of the risks are not unique to AI, such as being potentially vulnerable to technology lapses, model failures and cyber threats, which could endanger a firm’s financial soundness.


More specifically, the regulators are concerned that poorly supervised AI programmes could promote unlawful discrimination or unfair, deceptive, or abusive acts or practices.

Another area of concern is the lack of explainability of the outcomes produced by some AI approaches – often called black boxes.

The RFI goes on to raise questions about AI algorithms identifying patterns and correlations from data sets without human intervention and how that data is updated.

The five agencies are the Federal Reserve System, the Consumer Financial Protection Bureau (CFPB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA) and the Office of the Comptroller of the Currency (OCC). The comment period is open for 60 days following publication in the Federal Register.

This article first appeared in The Banker’s sister publication Global Risk Regulator.
