Editor’s blog | April 11 2023

Banks need to do more to ensure responsible AI use

AI’s ability to transform how the world works, consumes and learns is highlighted by applications like ChatGPT. In financial services, transparency around the use of such technology is key to protecting customers and keeping their trust.

The hype around artificial intelligence (AI) has skyrocketed since the launch of ChatGPT, the chatbot from OpenAI. In just two months, ChatGPT was estimated to have reached 100 million monthly active users, with wide-ranging use cases including writing essays, debugging code and composing music.

Such a leap in functionality and adoption prompted leading lights in the technology industry to call for a ‘pause’ in the development of powerful AI systems.

On March 22, the non-profit organisation Future of Life Institute published an open letter urging AI labs to pause the development of systems that can match human intelligence, and to halt the training of models more powerful than GPT-4, the newest version of OpenAI’s language model system. More than 50,000 signatories have added their names, including Elon Musk, chief executive of SpaceX, Tesla and Twitter; Apple co-founder Steve Wozniak; and Ripple co-founder Chris Larsen.

Their concern is the profound risk that advanced AI poses to society and humanity if left unchecked and unregulated. They are calling for robust AI governance systems, including regulatory authorities dedicated to AI, liability for AI-caused harm and well-resourced institutions to cope with the economic and political disruption that AI will cause.

Following the open letter’s publication, Unesco has called on governments to enact a global ethical AI framework.

Many governments and regulators have already begun working on AI legislation. The European Parliament, for example, is set to vote on its draft AI Act by the end of April, with adoption expected by the end of the year. On March 29, the UK government published a white paper to guide the use of AI in the UK, setting out five principles for regulators to follow.

While not on the bleeding edge of developing powerful AI systems, banks have already been using the technology for many processes, including customer authentication, fraud detection and risk modelling. For years, many have been debating and developing ethical frameworks and internal programmes to address responsible AI, as they recognise the profound impact the technology could have on people’s lives, particularly in areas such as credit scoring and lending.

They also understand the importance of transparency and explainability in their decision-making processes to safeguard customer trust.

However, according to a recent report from independent intelligence platform Evident, banks across North America and Europe are failing to publicly report on their approaches to responsible AI development. Its research found that eight of the 23 largest banks in the US, Canada and Europe currently provide no public responsible AI principles.

The research found a lack of transparency around how AI is already used — and how it may be used in the future — which, it stated, could damage stakeholder trust and stifle progress. The report also pointed to the lack of a standard for responsible AI reporting.

According to the report, only three banks — JPMorgan Chase, Royal Bank of Canada (RBC) and Toronto-Dominion Bank — have a “demonstrable strategic focus” on transparency around responsible AI. “Each showed evidence of creating specific responsible AI leadership roles, publishing ethical principles and reports on AI, and partnering with relevant universities and organisations,” the report stated.

In addition to assessing transparency, Evident analysed banks against three other pillars: talent, innovation and leadership.

JPMorgan Chase tops the Evident AI Index across all four pillars, a position the report attributes to strong leadership and sustained investment over half a decade. RBC ranks second, with a particularly strong performance in innovation and transparency. Citigroup comes third, also performing well in innovation.

North American banks are typically ahead of European institutions, occupying seven of the top 10 spots in the index. The clear European winner is UBS Group, in fourth place, with ING and BNP Paribas also placing in the top 10.

Approaches to hiring AI talent also differ regionally. North American banks are more likely to hire for dedicated responsible AI roles, typically recruiting from big tech firms, while European banks tend to lead responsible AI work from within their data ethics teams.

According to the report: “All banks will need to improve transparency around the work they are doing to stay apace with the growing public awareness of AI and its potential societal impacts. Ultimately, those that lead on transparency will be able to redefine how they are perceived by society, create competitive advantage and better seize the opportunities AI presents.”

Joy Macknight is editor of The Banker. Follow her on Twitter @joymacknight

