As Covid-19 forces banks to ramp up remote identification and customer onboarding, the sophistication of manipulated digital representations could present a serious challenge.

Deepfakes

“Become anyone” is how face-swapping technology provider Reface describes the machine learning and advanced artificial intelligence (AI) algorithms its mobile app uses to swap your face for that of your favourite celebrity or Marvel comic hero.

Using a single photo taken on a smartphone, the app did a pretty good job of replicating my facial features, but anyone who really knows me is not going to be fooled into thinking that is me in the latest Miley Cyrus music video. “Mobile apps [like Reface] have an element of fun, but they are limited in what they can do and it’s hard to imagine how you could weaponise them,” says Giorgio Patrini, founder, chief executive and chief scientist at Sensity, an Amsterdam-based visual threat intelligence company which defends individuals and organisations against deepfakes.

The term ‘deepfake’ refers specifically to the use of deep learning AI algorithms which take data about a person (a photo, video or audio recording) and generate replicas of that person’s face or voice. The growing availability of the technology used to create deepfakes is ringing alarm bells for those who work in biometric security and identity (ID) authentication, or anyone who relies on these methods to remotely identify and onboard customers.

Deepfakes could spoof systems used by banks to open or log into accounts for banking services that are supplied remotely

Giorgio Patrini, Sensity

As of June 2020, Sensity had identified more than 49,000 deepfake videos online — an increase of more than 330% since July 2019. The most targeted sectors for deepfakes are entertainment (63%), fashion (22%), sport (4.3%) and business (4.1%). In its 2019 report, The State of Deepfakes, Sensity highlights deepfake marketplace offerings ranging from “face-swap videos for $30 to custom voice cloning for $10 per 50 words generated”.

Mr Patrini says that, with a single picture taken from someone’s social media or private collections, deepfakes could be used for public shaming, to extort money from private individuals or to persuade an individual to hand over the password to a system. “Deepfakes could also spoof systems used by banks, to open or log into accounts for banking services that are supplied remotely,” he adds.

“We’re always looking at emerging threats, and deepfakes are definitely a concern for the [banking] sector,” says Teresa Walsh, global head of intelligence at the Financial Services Information Sharing and Analysis Centre (FS-ISAC), which shares threat information with banks to reduce cybersecurity risks in the global financial system. “The technology to accomplish deepfake fraud attempts has been proven to work; however, as of now, it’s still not a commodity tool for cyber criminals. As this technology develops and becomes commoditised on the black market, we will likely see greater use of deepfakes, perhaps even deepfakes-as-a-service merchants on the underground.”

Malicious intent

It is concerning that deepfake technology is increasingly becoming available for anyone to use, says Stephen Ritter, chief technology officer of Mitek, an ID verification vendor that counts banks such as HSBC among its customers. That includes fraudsters who may not have the technical know-how to create deepfakes themselves, but have the intent to use them for malicious purposes.


Stephen Ritter, Mitek

Mr Ritter adds that some of Mitek’s banking customers are very concerned about deepfakes. “It puts their entire business model at risk,” he explains. “Because of the pandemic, the entire industry is going digital and is having to establish a layer of trust with customers without seeing them in person. We’ve created this entire digital economy, which was a lifesaver during the pandemic, but it also gives fraudsters a more scalable way to launch attacks.”

In March 2019, a computer-generated voice was reportedly used to impersonate the chief executive of a German energy firm. The voice was so convincing that the chief executive of the company’s UK operations complied with a request to wire $243,000 to a Hungarian supplier. Two other cases in 2019 involved fake social media accounts featuring realistic-looking AI-generated photos of people who did not exist.

One of the fake accounts tried to extract information from short-sellers of Tesla stock, while the other attempted to penetrate networks of US government officials on LinkedIn. “Deepfakes take synthetic ID fraud to the next level,” explains Mr Ritter. “In the past, synthetic IDs were just a collection of data. Now, with deepfake technologies, those synthetic IDs can have faces and voices.”

Onboarding concerns

“Deepfakes are a bigger threat than people think,” says Andrew Bud, chief executive of London-based ID-verification company iProov. Mr Bud says his technical team pranked him during a recent online meeting, using a deepfake of his face and voice created with open-source tools widely available on the internet. Although the fake did not replicate all of his facial gestures accurately, he believes the technology is now good enough that it is difficult for the human eye to spot the difference. This throws up numerous challenges for financial institutions, which are moving swiftly to onboard customers remotely, he says. “The problem is if it’s not done correctly, [then] it is a honeypot for money launderers,” says Mr Bud.

Sergey Fedorov, head of customer support performance and onboarding at small business account and tax app ANNA (Absolutely No Nonsense Admin), says deepfakes caused him a few sleepless nights when he was designing the start-up’s onboarding process for businesses. “It was a significant responsibility to get this right and to make sure our digital ID verification partner was able to deliver,” Mr Fedorov recalls. “Fraudulent accounts are definitely a concern, as is money laundering — criminals can invent a story, a lie about their business. If you can create a sophisticated deepfake, you might be able to get through someone’s system.”

Fake customer IDs are a huge problem for banks, says Ben Hamilton, managing director of Kroll Business Intelligence and Investigations. Mr Hamilton has investigated several cases involving fake chief executive frauds, and one case where criminals netted $25m after passing a cryptocurrency exchange’s know your customer (KYC) checks using fake IDs and videos. “Criminals have misled financial services firms into thinking they are someone they are not for a long time,” he says. “Deepfakes will simply make it easier.”

For financial institutions, we worry about the use of this technology as the next evolutionary stage in fraud scams 

Teresa Walsh, FS-ISAC

“For financial institutions, we worry about the use of this technology as the next evolutionary stage in fraud scams such as business email compromise (BEC),” says Ms Walsh of FS-ISAC. In BEC scams, attackers posing as a company executive email the company’s finance department, requesting that money be transferred to a bank account the attackers control. “Targeting large groups of individuals using deepfakes would be difficult and expensive,” says Ville Sointu, head of emerging technologies at Nordea Bank. “So, it seems more likely that attackers will focus on invoice and enterprise fraud, as the pay-off is much greater.”

Growing threat

There is no readily available data as to the actual frequency of deepfake attacks on banks or the financial system. Some say it may already be happening and security systems are not detecting it. But while the financial threat from synthetic media or deepfakes may be low, the key policy question is how much this threat will grow over time, writes Jon Bateman, a fellow in the cyber policy initiative of the Technology and International Affairs Programme at think tank the Carnegie Endowment for International Peace.

In July, Mr Bateman published a working paper, ‘Deepfakes and synthetic media in the financial system: assessing threat scenarios’, which outlines 10 speculative threat scenarios in which deepfakes could potentially be used to target banks, stock exchanges, clearinghouses, brokerages, financial regulators and central banks. “Most people don’t think about the financial component of this threat, even though that is where a lot of digital mayhem ends up playing out,” he says.

The paper divides threats into two categories: “narrowcast” deepfakes, focused on manipulating individuals (such as a payroll officer); and “broadcast” deepfakes, disseminated via social channels, which target larger groups (such as investors). In the case of broadcast deepfakes, the paper says they could be used to generate “seemingly credible false narratives”. A persuasive deepfake video released on social media, for example, could depict a bank executive describing severe liquidity problems, culminating in a run on the bank.


Ben Hamilton, Kroll

More extreme scenarios include a “flash crash”, similar to the one that unfolded in 2013 when a state-sponsored Syrian Electronic Army attack on the Associated Press’s Twitter account falsely claimed that an explosion had rocked the White House, injuring then US president Barack Obama. In a matter of minutes, billions were wiped off stock indices, and treasury and bond yields fell before quickly recovering. In future, these sorts of attacks could be instigated using deepfakes. But, Mr Bateman adds, it would require a significant amount of work and technical know-how to pull such an attack off.

Could deepfakes lead to a crisis of trust in banks or the wider financial system? “In a healthy financial system, it is unlikely,” says Mr Bateman. “But in a system that is emerging or going through a crisis, the effects could be exacerbated.” In Gabon in 2018, a suspected deepfake video of president Ali Bongo Ondimba, who had not been seen in public for some months, added to existing speculation about his ill health. That speculation later culminated in an attempted military coup.

“The broader societal implications of deepfakes are my personal nightmare,” says Nordea’s Mr Sointu. “Ninety percent of people might know that it is a fake video, but it is the 10% that believe it. That could change how decisions are made in a particular landscape. In the investment space, for example, a deepfake video could show Nordea’s chief executive saying something which impacts the bank’s stock price, at least temporarily. It’s a complex picture of influence.”

Market manipulation is the most worrying scenario with deepfakes, says Mr Hamilton of Kroll. “By the time you get the deepfake video into the laboratory and complete the forensic investigation, the damage has already been done.” He says deepfakes could exacerbate “short and distort” crimes where people manipulate stocks by releasing false information on social media, which causes a stock to crash. They then make money on the short. “In the past three years, we’ve worked on a number of these cases. It’s an increasing problem and deepfakes will only make it worse,” he adds.

Deepfakes may be on banks’ radar, but there does not appear to be much of a dialogue within financial institutions about the damage and mistrust they could sow. Mr Sointu says it is not a topic of much discussion within Nordea because the bank does not remotely onboard new customers. “In Finland, the authentication method is dictated by requirements from regulators and the law, which mandates that we have to physically interact with customers when we onboard them,” he explains. “So, deepfakes haven’t really impacted us yet.”

Fighting fire with fire

The scenarios outlined for how deepfakes could be used to attack the financial system remain largely speculative. But in two to three years, as the technology becomes more widely available, the real threat is likely to emerge, says Stephen Topliss, vice-president of market planning for global fraud and ID at LexisNexis Risk Solutions. “[Deepfakes] should be on banks’ radar and they should ensure they have multiple layers of defence,” he says.

With biometrics (facial or voice recognition) emerging as the next technology for ID verification, Mr Topliss says there are going to be ways to exploit it. Some biometric ID vendors have introduced additional defences, such as liveness checks (which verify that a live person, rather than a photo or replayed video, is in front of the camera), but liveness detection on its own will not be enough, he insists. “If you’re making a payment using a mobile device, in addition to having facial authentication, you may need another layer looking at the full range of digital intelligence that you can gather associated with the event — for example, where is this event coming from. If it’s coming from where the person lives, can you assign a level of trust to that?” he queries.
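
The layered approach Mr Topliss describes can be pictured as a simple risk-scoring policy. The sketch below is a minimal illustration only: the signal names, weights and threshold are assumptions made for the example, not any vendor’s actual product.

```python
# A minimal sketch of layered authentication. The signals and weights
# below are hypothetical; real systems draw on far richer digital
# intelligence (device fingerprints, behavioural biometrics, etc.).
from dataclasses import dataclass

@dataclass
class AuthEvent:
    face_match_score: float   # 0-1 similarity from the biometric check
    liveness_passed: bool     # did the liveness challenge succeed?
    device_known: bool        # has this device been seen for the customer?
    geo_matches_home: bool    # does the event originate near their address?

def trust_score(event: AuthEvent) -> float:
    """Blend biometric and contextual signals so no single layer decides."""
    score = 0.5 * event.face_match_score
    score += 0.2 if event.liveness_passed else 0.0
    score += 0.15 if event.device_known else 0.0
    score += 0.15 if event.geo_matches_home else 0.0
    return score

def decide(event: AuthEvent, threshold: float = 0.75) -> str:
    # Below threshold, step up rather than reject outright: ask for a
    # second factor or route the event to manual review.
    return "approve" if trust_score(event) >= threshold else "step-up"

# Even a perfect face match from an unknown device in an unusual
# location is not enough on its own -> step-up.
print(decide(AuthEvent(1.0, True, False, False)))  # "step-up"
```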

Mr Fedorov adds that he speaks to ID verification vendors probably twice a week. “They’re all constantly improving their solutions to tackle deepfakes. Data science, analytics, visual recognition — everything needs to be part of a complex and multi-layered system of checks,” he says. Mr Ritter of Mitek says there is a lot that can be done to combat deepfakes. “We can use AI to defeat AI by training machine-learning models to detect deepfakes. When it comes to faces, there are a lot of cues that can be used to differentiate a real face from a virtual one.”
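
As a hedged illustration of Mr Ritter’s “AI to defeat AI” point, a frame-level deepfake detector can be built as a binary image classifier fine-tuned on real and synthetic face crops. The backbone, preprocessing and class convention below are assumptions made for this sketch, not Mitek’s system.

```python
# Illustrative sketch only: a binary real-vs-fake face classifier.
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_detector() -> nn.Module:
    # Reuse a pretrained backbone; blending seams and texture artefacts
    # left by face-swapping are low-level cues a CNN can learn to spot.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # [real, fake]
    return backbone

# Standard ImageNet-style preprocessing for a single face crop.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(model: nn.Module, face_img) -> float:
    """Return the model's probability that a face crop is synthetic."""
    model.eval()
    with torch.no_grad():
        batch = preprocess(face_img).unsqueeze(0)  # shape: [1, 3, 224, 224]
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item()  # index 1 = "fake" in this sketch

In production, such a frame-level score would be only one signal among many, aggregated across video frames and combined with the liveness, device and location checks described above.
```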

Spotting unusual activities


Franca Salvati, ANNA

But even if a multi-layered approach to ID authentication does a pretty good job of detecting deepfakes, the banks have little or no control over customers’ systems. “The anti-fraud controls that banks put in place today, such as secondary verification of large sum transfer requests, can help avoid the damages of these deepfakes,” says FS-ISAC’s Ms Walsh. “However, we will need to continue to focus on consumer education as many scams start at the customer level as opposed to the financial institution.”

Dealing with the reputational fallout from deepfakes could be even more challenging than detecting them. The deepfake may be so convincing that, despite being told it is fake, enough people still believe it is real. “The damage from deepfakes will be difficult to erase or recover from,” says Mr Hamilton, particularly if these narratives have immediate consequences. “The challenge for all of us is to become more sophisticated in spotting unusual narratives.”

Deepfakes are something the industry has to work together on, says Mr Sointu. “We need to educate people as much as possible to make sure they are aware of this new type of fraud.” Franca Salvati, ANNA’s fraud and reconciliation expert, adds that it is essential for those working within fintech or financial services to educate themselves and stay up to speed on new technologies and innovations. “At the recent Identity Week conference [in November],” she says, “the topic of deepfakes was discussed; conferences are a good place for us to share knowledge.”

Mr Sointu believes governments have a role to play in protecting both consumers and financial institutions against deepfakes. “In Finland there is an ongoing discussion about outsourcing eKYC to banks,” he explains. “But with 15 different eKYC systems in use, there are concerns that this could create more points of failure. Increasingly, the thinking is that eKYC should be a state-provided solution and the banks would just connect to that process, as some may struggle to implement a robust enough security solution.”

In an era of fake news, figuring out what or who to believe, and where to put your trust, is becoming increasingly challenging. Deepfake technologies are a significant blow to that foundation of trust, which banks have always relied on. Whether they remain the purveyors of trust in the financial system could depend to a large extent on how well they deal with this new and impending threat.
