
The answer to generative AI’s success in financial services

Banks are increasingly dependent on a technology that has some way to go to prove itself as dependable within financial services. Bill Lumley reports.

Banks worldwide are exploiting the potential of generative artificial intelligence (AI) and machine learning (ML) to help with functions including risk management, fraud detection, user engagement and knowledge retrieval, but concerns remain that biased data could be putting financial institutions at risk. 

Generative AI is still in its early stages and therefore continues to present the risk of critical errors and biased outputs, according to Rony Shalit, vice-president for compliance and ethics at Bright Data, a web data platform. “AI can be very useful in clear compliance cases, but in unexplored territories, it is up to the compliance team to evaluate the new risk and set mitigation activities,” he says. 

“Although AI can assist in gathering the required information, the final decision is more often than not based on the organisation’s risk appetite and management experience.”

US digital bank Zenus Bank, which holds an international financial entity banking licence in Puerto Rico, offers US bank accounts to clients in countries around the world. Gabriel Viera, chief compliance officer, says AI and ML present financial services organisations with advantages ranging from improved customer experience to greater operational efficiency and risk management. 

The bank uses face and voice recognition biometrics to authenticate client transactions. Its online banking app uses this technology in combination with AI when reviewing accounts and validating transactions in real time. 

Mr Viera says: “We look to address several compliance-related challenges such as anti-money laundering and know your customer, fraud detection and prevention, transaction monitoring, regulatory reporting and compliance documentation, as well as risk assessment and management using AI for our cross-border banking services.”

But, he warns, integrating AI in the banking industry may present specific challenges concerning regulatory audits, cyber security and privacy. “Banks must ensure compliance with industry regulations, address data privacy concerns and mitigate potential biases in AI algorithms,” he says.

“Robust cyber security measures are essential to protect against cyber attacks targeting AI systems. Maintaining human oversight and training, along with ensuring AI integration with existing systems, is critical for successful implementation. Additionally, resource constraints and interoperability issues may arise, requiring careful planning and collaboration to maximise AI’s benefits while safeguarding data and adhering to regulatory requirements.”

Without validated and reliable data, AI is unusable

Rob Houghton, founder and CTO at Insightful Technology, says AI is being used by financial services companies for functions ranging from transaction-monitoring software to workforce surveillance solutions. “Regulators are trying to respond, but it’s moving so fast this is almost impossible,” he says. “As a result, there’s a lack of clarity over what’s compliant. With vague rules, it’s easy for authorities to turn the rules to their favour.

“The only way to avoid penalties is for institutions to have complete transparency. Every decision AI makes and every process it automates must be recorded and explainable. This provides a full audit trail, showing the workings of the software,” he explains.
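A minimal sketch of what such an audit trail could look like in practice. The class, field names and example decision below are hypothetical illustrations, not taken from any vendor's product: the point is simply that every automated decision is recorded with its inputs, output, model version and explanation, so its workings can be replayed later.

```python
import json
import time
import uuid

# Hypothetical audit logger: records every automated decision with its
# inputs, output and model version so the reasoning can be shown later.
class DecisionAuditLog:
    def __init__(self):
        self.entries = []

    def record(self, model_version, inputs, output, explanation):
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "explanation": explanation,
        }
        self.entries.append(entry)
        return entry["id"]

    def export(self):
        # Serialise the full trail for a regulator or internal audit.
        return json.dumps(self.entries, indent=2)

log = DecisionAuditLog()
log.record(
    model_version="fraud-model-1.3",
    inputs={"amount": 9400, "country": "US"},
    output="flagged",
    explanation="amount within 5% of reporting threshold",
)
print(len(log.entries))  # one decision recorded
```

In a production system the log would be written to tamper-evident storage rather than held in memory, but the shape of each entry is the substance of the "full audit trail" Mr Houghton describes.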

Evgeny Likhoded, president at regulatory risk intelligence firm Corlytics, says: “The accuracy, rigour and transparency of the way your models operate for your clients is very important in compliance. If a model is not precise enough, then it’s very difficult for them to adopt these technologies within their organisations. So they’re looking for assurance and accuracy.”

Model training

Central to the issue is the way in which developers train AI models. Developers understand that organisations are increasingly looking to industrialise their AI models, and that investments in the technology must demonstrate a strong return to be worth pursuing. 

Ensuring properly trained models is therefore crucial for successful implementation of AI-based use cases, according to Jon Payne, director of sales, engineering and education at creative data technology provider InterSystems.

“The bottom line is that AI models need clean, timely data from a variety of sources if they are ultimately to become a reliable basis for decision-making. The more high-quality data a model consumes, the more finessed its training. In the case of a financial services firm, for example, that means sourcing internal and external data from market data providers, news feeds, risk data and historical data, as well as real-time operational data,” says Mr Payne.

Without validated and reliable data, AI is unusable, agrees Daniel Doll-Steinberg, co-founder of active innovation hub EdenBase. “On the financial services side, combining AI and data tokenisation will significantly improve transparency, auditing and tracking of transactions, whether they be on an exchange, OTC [over the counter] or even peer-to-peer,” he says. 

AI should be used as a tool to empower professionals and to ease the process of regulatory compliance in tandem with humans

Gabriele Oliva

“It will enable banks to make limits easier to administer and herald real-time oversight, allowing regulations and specific corporate and sector rules to be encoded directly into the assets themselves. This will ensure that any regulatory changes and corporate announcements are identified and enforced immediately, and human error minimised.”
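As a rough illustration of rules being “encoded directly into the assets themselves”, here is a hypothetical sketch. The asset class, the rules and the limits are all invented for the example; the idea is that a compliance check travels with the asset and runs on every transfer, so a regulatory change can be enforced by pushing a new rule.

```python
# Hypothetical tokenised asset that carries its own compliance rules:
# every transfer is checked against the encoded rules before it settles.
class TokenisedAsset:
    def __init__(self, holder, balance, rules):
        self.holder = holder
        self.balance = balance
        # Each rule is a callable (sender, recipient, amount) -> error message or None.
        self.rules = rules

    def transfer(self, recipient, amount):
        for rule in self.rules:
            error = rule(self.holder, recipient, amount)
            if error:
                return f"blocked: {error}"  # enforced at the asset level
        self.balance -= amount
        return "settled"

# Example rules; a regulatory change would arrive as a new or updated rule.
def single_transfer_limit(sender, recipient, amount):
    if amount > 10_000:
        return "exceeds single-transfer limit"

def restricted_counterparty(sender, recipient, amount):
    if recipient in {"restricted-entity"}:
        return "recipient is on a restricted list"

asset = TokenisedAsset("bank-a", 50_000,
                       [single_transfer_limit, restricted_counterparty])
print(asset.transfer("client-1", 5_000))   # settled
print(asset.transfer("client-2", 25_000))  # blocked: exceeds single-transfer limit
```

Real tokenised assets would enforce such rules in smart-contract code on a ledger rather than in application logic, but the oversight model is the same: the rule fires at the moment of the transaction, not in a later review.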

Indraneel Basu Majumdar, senior financial services solicitor at business law firm Harper James, says: “There are risks with the use of the technology, namely that the outcomes are only as good as the AI programming and the data sets, and incomplete or inaccurate data sets may result in incomplete outcomes. 

“Further, some of the nuances in compliance decision-making, which call for judgement, may require humans to be involved for liability reasons and to avoid the risks of bias in decision-making.”

Antonina Burlachenko, head of quality and regulatory consulting at global digital consultancy Star, says clients with ML solutions are paying close attention to AI regulations and are building procedures and documentation that help them to demonstrate a controlled and trustworthy approach to AI development. 

“They focus on each stage of the ML model lifecycle, starting with inception, data acquisition and pre-processing in order to ensure data quality; model training, tuning and validation; and post-market surveillance,” she says.
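A toy sketch of that lifecycle as a series of gates. The checks and thresholds below are illustrative assumptions, not Star’s actual process: a data-quality gate before training, an accuracy gate before release, and a post-market drift check afterwards.

```python
# Illustrative ML lifecycle gates: data quality before training,
# validation before release, drift monitoring after deployment.
def quality_gate(records, min_completeness=0.95):
    # Reject the dataset if too many records are missing a key field.
    complete = [r for r in records if r.get("amount") is not None]
    return len(complete) / len(records) >= min_completeness, complete

def validation_gate(model_accuracy, threshold=0.9):
    # Block release if validation accuracy is below the agreed threshold.
    return model_accuracy >= threshold

def post_market_check(live_error_rate, baseline=0.05):
    # Flag the model for review if live errors drift above the baseline.
    return "ok" if live_error_rate <= baseline else "review"

data = [{"amount": 100}, {"amount": 250}, {"amount": None}]
passed, clean = quality_gate(data)
print(passed)  # False: a third of the records fail the completeness check
```

Each gate produces a recorded pass/fail decision, which is the kind of documentation Ms Burlachenko says clients use to demonstrate a controlled and trustworthy approach.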

Gabriele Oliva, head of data analytics at data company Bip xTech, stresses that AI is not foolproof, requires contextual, business-specific detail and is dependent on the quality of its data sources. “An AI solution must in fact be cautiously designed and tested to ensure it is reliable and ethical, and doing so necessitates carefully training it to suit its specific environment while avoiding biases influencing its output,” he says. 


“Ultimately, AI solutions cannot always guarantee a level of precision required by compliance, nor can they replace human judgement grounded in sensitivity and experience. This shouldn’t discourage companies from using AI. It should be used as a tool to empower professionals and to ease the process of regulatory compliance in tandem with humans,” he concludes.
