Image: a brain composed of circuitry (Getty Images)

Plans to create a trustworthy and responsible environment for AI development within the EU inch closer to fruition. James King investigates. 

The EU’s groundbreaking Artificial Intelligence (AI) Act is moving closer to joining the bloc’s statute book. On April 27, the European Parliament (EP) reached a provisional political agreement on the text, subject to technical-level adjustments and a further committee vote in May.

The proposed legislation will then face an EP plenary vote, before moving to so-called trilogue negotiations in the second half of the year. As such, the world’s first comprehensive legal framework on AI could be in place before the end of 2023.

At its core, the AI Act aims to create a trustworthy and responsible environment for AI development within the EU. 

To this end, lawmakers have adopted a classification system that categorises AI by risk: unacceptable, high, limited and minimal. For AI models that fall into the two lower risk bands, the implications of the new framework will be limited, consisting mainly of basic transparency requirements.

AI applications that are deemed ‘unacceptable’, such as social credit scoring systems, are banned with almost no exceptions. But for AI systems that are deemed ‘high risk’, including products that pose a risk of harm to the health, safety and rights of users, a much tougher regulatory environment beckons. 


For starters, these platforms will be subject to rigorous testing, data requirements and accountability frameworks, imposing high burdens on firms that fall within this risk tier. 

“The EU AI Act is meant to provide added levels of governance to organisations developing high-risk applications of AI,” says Andrew Gamino-Cheong, chief technology officer and co-founder of Trustible, an AI governance management platform. 

“For those in higher-risk categories, the Act will purposefully slow down AI development to ensure proper testing on safety, fairness, privacy, and other considerations are taken into account before deploying a model. For lower-risk applications, it should not have much meaningful effect on innovation,” he continues. 

ChatGPT under the spotlight

Meanwhile, under a last-minute amendment to the agreement reached on April 27, generative AI systems, including ChatGPT, will be required to disclose whether they have used copyrighted material in their training data. This alone could have major implications for AI development.

But some of these systems will also be subject to stricter regulatory obligations under the new framework, depending on whether the EU classes them as ‘foundation model’ systems, according to reporting from Euractiv. This designation applies to models trained on large data sets and with a general output capable of fulfilling a range of tasks. 

This means that for financial services firms, the AI Act could present a number of challenges.

For one, AI systems that govern access to essential public services, a category that the text extends to some aspects of financial services, will be classed as high risk. This potentially opens the door to stricter regulatory obligations being imposed on at least some AI products focused on financial technology.

But with additional oversight likely to be applied to foundation model systems as well, the Act could weigh down the development of financial services-focused AI systems.

These issues, and others, are prompting concerns from some market participants about the broader structure of the legislative proposal. 

“Local, European private sector interests are so far removed from the negotiating process. What we have is like a patchwork of compromises and sometimes a tendency to opt for the lowest common denominator on a particular provision [by lawmakers]. So you end up with this not very helpful framework that is trying to do everything at once,” says Jonas Frederiksen, senior director of EU policy and government affairs at Circle. 

Over the coming months, further amendments will be made to the Act as it progresses through the EU’s negotiating architecture. As such, at least some of the potential constraints facing the financial services sector may be addressed. But many of the bigger-picture questions over Europe’s push to regulate AI will remain, in an era of heightened geopolitical competition and shifting economic power.

“It’s clear that Europe needs to wake up and do something to position itself on exponential technologies. And at the moment, it’s going in the wrong way,” says Mr Frederiksen.
