AI regulation

Many jurisdictions are following the EU’s proposed regulation of artificial intelligence. Firms that have not begun to develop their AI systems to comply with the new regulation are being advised to do so now.

Banks and regulators worldwide are closely monitoring artificial intelligence (AI) regulation following the recent publication by the European Commission (EC) of its proposal for EU regulation of AI.

This document lays down rules for the development and use of AI systems, with the ultimate objective of creating a legal framework for trustworthy AI. Penalties for infringement, which will take effect two years after the regulation formally enters into force, amount to the higher of €30m or 6% of worldwide annual turnover.

Time to get started

Firms that have not begun developing their AI systems to comply with the new regulation are being advised to do so now. Accenture has been working with a large financial institution for two years, helping it to set up foundations for responsible AI and data ethics, and there is probably still another year’s worth of work to go. Ray Eitel-Porter, Accenture’s managing director applied intelligence and global lead for responsible AI, says: “This isn’t just something that you do in three months.”

Financial services firms will be strongly affected by the regulation, he warns, but compared with other sectors they have an advantage: very well-developed compliance and risk regimes.

Some 95% of firms acknowledge the regulation will affect them, but only 6% are prepared to put responsible AI principles into practice, according to an Accenture survey of 850 firms across 17 regions and 20 industries. Most respondents see AI regulation as a key business priority, with more than 80% saying they will commit 10% of their total AI budget to responsible AI.

“People recognise they’re going to be impacted and they recognise they’re not there yet,” says Mr Eitel-Porter. “There’s a lot of work to be done, but they’ve dedicated budget and effort, and they say this is going to be a key priority over the next two or three years.”

Following EU’s lead

Since publication of the EC’s white paper on AI in February 2020, regulators in various jurisdictions outside the EU have followed suit, according to Charlotte Walker-Osborn, partner at law firm Eversheds Sutherland. For example, she says Australia has confirmed it will largely follow the proposed EU AI regulation, which she expects to come into force in the next year, and she predicts the UK will be about a year behind. “Countries like the US will go about it in their own way, and the Middle East has already followed some of the EU guidance on trustworthy AI,” she adds.

According to an internal briefing document she produced, the proposed regulation adopts a wide definition of AI systems and will have extensive effects on both AI and other technologies. Creating the platform for AI regulation compliance will therefore be tougher than many financial services firms may have envisaged, Ms Walker-Osborn suggests.

Banks that Eversheds Sutherland has been working with on AI for the past couple of years are fully aware of the issues that need to be addressed, but they are still unclear how to go about this from a technology perspective, she says. Ms Walker-Osborn predicts companies will either try to build this into all their tech procurement systems and procedures or, where it is more cumbersome, they will have to work out quite specifically whether and where to deploy human intervention.

Benefits

The Accenture survey, meanwhile, shows clear recognition of the purpose and benefits of AI regulation, notably the way it will address bias. “AI is always trained on historical datasets and most of these unfortunately tend to have bias embedded in them, based on the way society has operated,” says Mr Eitel-Porter.

He adds that culture is essential to uniting the whole organisation around responsible principles and practices: both awareness and an ethical culture must be instilled within organisations.

Mr Eitel-Porter says the recommendations in the forthcoming regulation align very closely with best practice for responsible AI in any financial services organisation: systems and controls should be in place to ensure everything works as intended and does not produce unintended consequences.

According to an Eversheds Sutherland briefing note, the proposed EU rules will be enforced through a governance system at member-state level, with enforcement powers building on existing structures and a co-operation mechanism at EU level with the formation of a European AI board. It concludes that there will be a delay before the regulation is in its final form, given that there will inevitably be considerable debate over its precise terms.

However, with the EU’s proposed AI regulation and its associated penalties now published, industry-wide guidance is clear: firms that have not seriously begun working towards compliance with the spirit of the as-yet unenforced rules should start now.

The Banker is a service from the Financial Times.