Editor’s blog, February 2

The risks of ‘hallucinating’ financial reports

The use of generative AI in how companies share information with investors and other stakeholders creates new risks, which banks should not ignore
Image: Carmen Reichman/FT

Towards the end of last year, KPMG surveyed a couple of hundred “decision makers” in the US about their use of artificial intelligence. The consultancy’s questions were addressed to chief financial officers, heads of financial reporting, and other senior executives overseeing accounting and auditing work at companies with more than $1bn in revenues and at least 500 employees. 

A majority of respondents said (perhaps unsurprisingly, and likely reassuringly for the survey’s authors) that a “third-party attestation will be valuable” when using AI in financial reporting. The study also shows that generative AI, the more creative part of the technology, is a priority for 40 per cent of professionals.

Generative AI isn’t necessarily a “top priority” in this area (cloud migration, data and analytics, and robotic process automation are front of mind here), but it ranks highest among the less urgent concerns.

For banks, what happens in this space is arguably of greater importance than for companies in other industries, as the application of generative AI in financial reporting could influence both lenders’ ability to attract capital and their understanding of clients’ financial health. 

In both cases, what happens if the technology gets it wrong? And who would be responsible if the algorithm “hallucinates”? 

In the UK, a far smaller consultancy than KPMG has been sounding the alarm. Claire Bodanis, founder of Falcon Windsor, a small firm offering corporate reporting services, has been knocking at the door of the Financial Reporting Council, the body regulating auditors and accountants, asking it to intervene.

Bodanis is concerned about the risks of misrepresentation and lack of accountability that generative AI could introduce in the highly consequential and scrutinised area of corporate reporting. 

She says others share her worries, including in the financial services sector and across professional bodies, with the UK and Ireland’s Chartered Governance Institute urging corporate boards to prepare for the opportunities as well as the risks being created by the use of AI in reporting. 

Bodanis is also talking to academics. This year, she’ll be working with researchers from Imperial College London and with data science company Insig AI to investigate the influence of AI on “truthful, accurate, clear reporting that people believe”, she says. 

This is an area that can have huge consequences for the safe and smooth running of the financial system. The Banker team has been meticulously reporting on the ways in which AI can influence the provision of financial services.

We will keep you informed on developments in the use of new technology in reporting too. And, as always, we are keen to hear from readers on this or any other subject affecting finance, whether they work at large, global banks and groups or as small, independent consultants anywhere in the world. Do get in touch.

Silvia Pavoni is editor in chief of The Banker. Follow her on LinkedIn.
