Regulations | December 5, 2023

Capital markets face regulatory scrutiny over AI use

Financial regulators have artificial intelligence in their cross-hairs as the technology’s use rapidly expands within the capital markets ecosystem. 

In July, the US Securities and Exchange Commission (SEC) released proposals mandating that broker-dealers and investment advisers manage potential conflicts of interest arising from predictive data analytics. 

Once finalised, these regulations will compel firms to scrutinise artificial intelligence (AI) usage in investor interactions, ensuring that investors’ interests are prioritised over those of the firm. 

However, US capital market participants are expected to face stricter regulatory and compliance checks on their use of AI before these rules are embedded.

The SEC’s Division of Examinations listed AI among its priorities for 2024, and has established specialised teams to ensure AI use in the capital markets complies with federal securities law, according to research from the law firm Foley & Lardner. 

The division’s priorities state that it “remains focused on certain services, including automated investment tools, AI and trading algorithms or platforms, and the risks associated with the use of emerging technologies and alternative sources of data”.

This comes as SEC chair Gary Gensler has warned of the financial stability risks linked to AI’s horizontal use by different financial market participants and its potential to trigger a new crisis through “herd behaviour”. 

Significant changes

Adriano Koshiyama, the co-founder and co-CEO of Holistic AI, an AI governance, risk, and compliance software platform, says 2024 will deliver significant changes around the future of regulation.

“This year was the year of AI awareness,” he says. “I think next year will be the year of debate around regulation. We’re probably going to see the first lawsuits, the first fines, and the first big cases in the world of AI. That will drive a lot of the conversation.”

The complexity of some AI models, including deep learning systems and newer subsets such as generative AI, poses a challenge for financial regulators. 

Understanding AI predictions and decision-making processes is chief among these difficulties, particularly concerning firms’ consumer-facing and investment functions. 

Mr Koshiyama co-authored a September 2023 Bank of England working paper investigating these and other questions. The authors found that deep-learning models in finance reach similar conclusions or predictions, but that how these models explain their ‘thinking’ varies widely. 

“We are still unsure whether these AI models could [destabilise] the financial market,” he says. “If you’re a regulator looking at this, you will probably have some transparency concerns.”

Recent regulatory activity

These and other issues are prompting a flurry of regulatory activity. 

Earlier this year, Canada’s Ontario Securities Commission published joint research into AI use in the country’s capital market with EY, concluding it was at “an intermediate stage” and that capital market regulators had a vital role to play in its responsible development. 

In November, the Monetary Authority of Singapore (MAS) announced a new generative AI risk framework for the financial sector known as ‘Project MindForge’.

The MAS’s work, including a whitepaper to be published in early 2024, will focus on the banking sector before working with capital market participants and insurance groups. 

The capital markets sector is better placed than most to deal with these changing regulatory winds: the industry has been at the forefront of AI development for years, building pioneering applications and deep expertise across the broader sector.

Global financial juggernaut Northern Trust, which offers asset servicing, investment management and wealth management services, is experimenting with generative AI models in a sandbox environment. This is helping the firm understand the full spectrum of use cases — and risks — posed by AI in different business scenarios. 

Alvin Chia, Northern Trust’s head of digital asset innovation for the Asia-Pacific, says early feedback has been promising.  

“Based on our conversations with portfolio managers, the sentiment around generative AI is very positive. We believe it will leave a significant footprint on the capital markets industry in the years ahead.”

Alongside this experimentation, Mr Chia’s team keeps abreast of changing regulatory standards.

“We are using information shared by regulators worldwide as early guiding principles. This is helping us to evaluate risk and remain compliant in the age of generative AI,” he says. 

The MAS is a case in point: in 2018, it issued its “fairness, ethics, accountability and transparency” guidelines for financial services firms engaging with AI and data analytics. 

But as a new generation of AI models promises to reshape capital markets, global regulatory guidelines will rapidly transition into a much more stringent set of rules.
