Artificial intelligence has long been a board-level conversation for banks, but when the EU approves its AI Act this year, it will become a regulatory and compliance issue, write Lorna Doggett and Carolyn Sullivan of law firm Eversheds Sutherland.

The EU is expected to approve its long-awaited AI Act in March, introducing new rules for, among others, lenders in the EU making credit decisions based on machine learning or AI-based algorithms. The ‘machine learning’ aspect of this casts a very wide net over almost any decision made using data, a key point that should see banks sit up and take note.
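
To see how wide that net is cast, consider a minimal, hypothetical sketch of the kind of system the act would cover: a lender that fits even a basic logistic regression to historical repayment data in order to approve or decline applications is already making decisions with a model that learns from data. The feature names and figures below are invented purely for illustration; they are not taken from the act or from any real lender's system.

```python
# Illustrative only: a minimal, hypothetical credit-decision model.
# Features, figures and the decision threshold are invented for this example;
# they are not drawn from any real lender's system or from the AI Act itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicant data: [income, existing_debt, years_at_address]
X = rng.normal(loc=[40_000, 10_000, 5], scale=[15_000, 8_000, 4], size=(500, 3))
# Synthetic past outcomes: 1 = repaid, 0 = defaulted
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 10_000, 500) > 20_000).astype(int)

# Even this basic statistical model is "machine learning" in the broad sense:
# it learns its decision rule from historical data rather than fixed rules.
model = LogisticRegression(max_iter=1_000).fit(X, y)

applicant = np.array([[35_000, 12_000, 2]])
print("approve" if model.predict(applicant)[0] == 1 else "decline")
```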

Since the act began its journey through the EU's legislative process in April 2021, it has progressed to a near-final draft. The act is expected to come with a two-year grace period, although that is not long given that any AI-related initiatives banks currently have in development will need the act's requirements baked in from the start.

If boardrooms don’t discuss the regulation in the coming months, they should take note of its extraterritorial reach, its broad definition of AI and its eye-watering fines of €30m or 6% of worldwide turnover.

Organisations are increasingly relying on AI technology, and regulators in the UK and EU are turning their attention to it, aiming to capture its benefits while giving individuals confidence that its growing use is appropriate and lawful.

While the UK regulator, the Information Commissioner’s Office, has recently published its own guidance on the use of AI, the EU’s version will capture more than what we tend to think of as AI – and it has extraterritorial reach.

This means that providers placing AI systems on the EU market (regardless of whether the provider is established in the EU), as well as providers whose systems produce output that is used in the EU, fall within the act's scope. Non-EU providers of AI systems must take note. Much like the EU's General Data Protection Regulation, this act has potential for global impact.

When the EU votes on the act, expected by the end of March 2023, it wants the legislation not only to address policies and governance around the deployment of AI tools, but also the ethical and moral questions raised by AI technologies in certain areas. For example: how can AI help banks remove human bias from decisions about who can borrow money, on what terms and at what rates?

The AI Act takes aim at three risk categories. First, prohibited AI practices, such as social scoring or subliminal techniques to distort behaviour, are banned. Second, high-risk AI systems, such as AI-based decisions on access to education or CV-scanning tools that filter candidates, are subject to specific legal requirements. Third, AI tools that pose a low or minimal risk, such as chatbots, are largely left for existing laws to regulate, apart from certain transparency obligations.

High-risk AI systems

AI systems used by some larger lenders for evaluating an individual’s creditworthiness, or giving credit scores, fall into the high-risk category. This is why the AI Act is so relevant to the banking sector.

However, the AI Act makes exceptions to differentiate between big players and start-ups or small companies, placing less of a burden on the latter.

Requirement to eliminate all bias

One key aspect of the act is its approach to ethics, which demands that bias be prevented from creeping into AI systems as a result of the data on which they are trained.

Applying this to the banking sector, the use of AI by credit institutions to inform lending decisions or approve credit is nothing new, and neither is the risk of bias or prejudices creeping into the output of those systems.

A 2021 investigation of US mortgage applications by The Markup found that black applicants were 80% more likely to be rejected for a loan than white applicants with similar financial characteristics, while Latino applicants were 40% more likely and Native American applicants 70% more likely to be rejected.

Banks in the UK have already been warned by UK financial regulators about the use of AI in lending. However, the introduction of the AI Act means that lenders won’t just face a potential ethical and reputational risk from use of AI tools, but a tangible regulatory and financial one too.

The act also builds on the legal prohibition of credit institutions making biased decisions related to certain “protected characteristics” (such as age, gender or race). It asks providers of AI systems to look back at their tools and ensure the decision outputs are free from any bias – not just those relating to a protected characteristic.

This has far-reaching consequences for providers of AI tools.

In a paper, Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute, points out that there are numerous algorithmic groups that AI tools have been using for some time to make important decisions, including decisions on credit or mortgage applications.

The paper lists examples such as “dog owners, sad teens, video gamers, single parents, gamblers, or the poor” as categories that have long been the subject of such decision-making. The AI Act requires providers of AI technology to ensure that the risk of “biased outputs” is reduced even for these characteristics. 

This means lenders who may have developed sophisticated machine learning to analyse individual spending habits against a person’s hobbies, interests and educational background will now have to consider all the potential biases attached to those algorithms.
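
As a rough illustration of what such a review might involve, the hypothetical sketch below compares approval rates across a few of these non-protected “algorithmic groups” and flags large gaps for further review. The group labels, the decision log and the 80% rule-of-thumb threshold are assumptions made for this example only; the act does not prescribe a particular test.

```python
# A rough, hypothetical sketch of one way a lender might start auditing model
# outputs for disparities across non-protected "algorithmic groups".
# The group labels and the 80% rule-of-thumb threshold are illustrative
# assumptions, not requirements taken from the AI Act.
from collections import defaultdict

# (group, approved) pairs as they might come out of a decision log
decisions = [
    ("video_gamer", True), ("video_gamer", False), ("video_gamer", True),
    ("single_parent", False), ("single_parent", False), ("single_parent", True),
    ("other", True), ("other", True), ("other", False), ("other", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: a / t for g, (a, t) in counts.items()}
baseline = max(rates.values())  # best-treated group as the reference point

for group, rate in rates.items():
    ratio = rate / baseline
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio vs best group {ratio:.2f} -> {flag}")
```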

Obligations of “providers”

There is additional risk for banks that are also “providers” of AI. They must monitor their high-risk AI systems and keep records in order to identify whether those systems present a “risk” to consumers.

Where such a risk is identified, providers must immediately inform the national competent authorities of the member states in which they made the system available.

Separately, providers of high-risk AI systems must report any serious incident, or any malfunctioning of those systems that constitutes a breach of obligations under EU law intended to protect fundamental rights, to the market surveillance authorities no later than 15 days after becoming aware of it.

In sum

Several US states already have AI laws (either in draft form or in force) and the UK is making progress on its own AI framework, which is likely to borrow ideas from the EU AI Act. This means that even banks and firms outside the EU will either fall under the act’s extraterritorial reach or be caught by similar rules elsewhere in time.

Whatever direction this takes, we cannot deny that AI is a global phenomenon. Most organisations already use AI in the form recognised by this act, particularly those in the banking sector. Banks and financial institutions will have to ‘bake in’ this regulation to any AI they are currently delivering, developing or even dreaming up, in order to stamp out bias from the start.

The alternative is to risk testing the mettle of the EU’s new regulatory reach and finding out how much of the €30m fine its regulators are willing to impose.

 

Lorna Doggett is the legal director and Carolyn Sullivan is an associate at law firm Eversheds Sutherland. Both work on data privacy across the financial services industry.
