Analysis & opinion, January 2, 2019

Matthew Blake: Financial services must take the lead in defining customer data principles

Customer data has been identified as a major risk facing the financial system, which is why the World Economic Forum (WEF) has been working to formulate principles surrounding its use, dissemination and possible leaks. WEF executive committee member Matthew Blake explains.

Using customer data to reach credit decisions. Reliance on machine learning to spot underwriting fraud. Consolidation of transaction data to identify cross-selling opportunities. The explosion in customer data, alongside technological innovation, has improved financial services providers’ ability to tailor products and operate more efficiently.

But the use of data also poses urgent questions around ethics, fairness and accessibility. In this regard, social media companies such as Facebook provide a cautionary tale. Left unanswered, these questions could erode customers’ trust in financial institutions. Conversely, a cautious and diligent approach offers financial firms a chance to redeem themselves, and gives other sectors a blueprint to follow.

Spotlight on customer data

Two years ago, senior leaders from the world of finance working with the World Economic Forum (WEF) identified the use of customer data, alongside cybersecurity, as a major risk facing the financial system. Given the problems some tech giants faced in 2018, it was a prescient perspective. Financial services stakeholders spent 2018 developing global principles for the appropriate use of customer data. The result is a set of guidelines and recommendations all institutions can use.

It is a timely initiative. Scrutiny of how businesses respond to data challenges will only increase. Those with a proactive approach will win out; those with a reactive one will find themselves doing ‘too little, too late’, risking legal trouble or lost clients. At Davos at the end of January, the financial services sector has an opportunity to take a leadership role in defining what the appropriate use of customer data means. It is one the industry is well advised not to waste.

The expanded use of data holds enormous promise for both consumers and businesses. Customers stand to benefit from new products and services tailored to their individual needs, an enhanced user experience, and a means to access products that historically may have been out of reach. Businesses benefit as data enables better risk management, leads to cost savings from more efficient internal operations, and allows for the development of new and enhanced offerings.

But the growing reliance on digital data also presents consumers and businesses with serious risks. Customers may fall victim to fraud and theft when their data is leaked or stolen; their privacy is threatened when their data is used without consent; and instead of gaining enhanced access to the financial system, they may find themselves excluded from services due to real or perceived concerns highlighted by big data analytics.

WEF global principles

In response, businesses and regulatory agencies working with the WEF developed a set of global principles to guide the collection, use and sharing of customer data in financial services. These principles acknowledge regional differences in societal and cultural beliefs, particularly around privacy. They do not attempt to substitute for country- or industry-specific guidance. Rather, they offer business leaders and policy-makers a tool for capturing the benefits of data availability while managing the associated risks.

What are they? The principles developed by the WEF-convened stakeholder group cover five dimensions: control, security, personalisation, advanced analytics and portability. They also highlight a number of trade-offs inherent in data and privacy policies that stakeholders must understand, many of them carrying important ethical considerations.

Take ‘control’, for example. The group of banks, insurance companies, law firms, technology companies, public sector actors, trade unions and religious bodies working with the WEF determined that companies should be clear about their use of customer data, obtain customer agreement to their data policies and, where appropriate, seek consent for specific uses.

In short, and at a high level, the group concluded that it is the customer who controls their data, not the business. This is an important determination that has been picked up in a number of data regulations that have come into effect over the past year, such as the European Union’s General Data Protection Regulation and various data localisation laws.

Individual vs collective risks

Yet not everybody agrees. Critics warn against overprotecting the individual and their privacy, not only because it limits businesses’ ability to target and market products to them, but also because it creates significant challenges for combating financial crime. Effective anti-money-laundering efforts rely on banks’ ability to share data with each other and with regulators; stringent privacy protections can prohibit such sharing. Ultimately, what is more important: protecting an individual’s privacy and thus their personal data, or preventing heinous crimes such as human trafficking and terrorism financing?

Or take the principle on advanced analytics: companies should be able to comprehensively test, validate and explain their use of data analytics and models to customers. As companies increasingly rely on machine intelligence to reach decisions, the spirit of this principle has likewise found its way into regulations and regulatory guidance, such as the FEAT principles (fairness, ethics, accountability and transparency) published by the Monetary Authority of Singapore.

Customers should have the right to understand why a decision was reached and why the model methodology is appropriate, say proponents of greater transparency. But ‘explainability’ comes at a cost. The very promise of artificial intelligence algorithms is that they can improve themselves, often in ways that even their creators do not fully understand. What if an unexplainable model – a black box algorithm – leads to outcomes resulting in 80% financial inclusion of a certain population, while the explainable model only allows for 60% inclusion? Should ‘explainability’ trump inclusion potential?

A safe pair of hands

The trade-offs tackled by the WEF-convened group are specific to financial services. The larger ethical questions they raise are not. The ethics of data collection, use and sharing touch every industry and every stakeholder group. But given the amount of data that financial services firms possess and handle, and the sensitivity of the information they look after, the industry has the opportunity to take leadership in setting standards. It should embrace that opportunity and it has a good story to tell.

Banks are notoriously conservative. Their risk aversion and natural hesitance to embrace change put them on the back foot when customers started embracing new fintech offerings such as peer-to-peer lending and robo-advice. Unable to innovate at the speed of technology companies large and small, incumbents feared being sidelined.

Some in the industry predicted a future in which traditional banks would be stuck with the low-return, unsexy pieces of the revenue pie while technology companies cherry-picked the most desirable, unregulated ones. This scenario has not played out, at least not yet. The move-fast-and-break-things attitude of some technology companies might play well when disrupting retail or travel. It becomes a more problematic mantra when taking on people’s retirement savings, life insurance or mortgages.

After watching Equifax expose their most personal data, Cathay Pacific and British Airways leak passport and credit card details, and Facebook share user profiles with questionable third parties, many consumers re-appreciated the often staid but usually reliable financial institutions that hold their savings, fund their student loans and manage their pension assets.

Developing a blueprint

If anything, banks had in the past been criticised for being too reluctant to share data with third parties. Data aggregators such as Yodlee complained that the limitations banks placed on the data third parties could pull hindered consumers’ ability to optimise their finances. Banks responded to the criticism by arguing they were merely protecting their customers from security risks, and by pointing out that few standards exist for how third parties may use customer data or how they must protect it.

Some fintechs, meanwhile, have been criticised for selling data to hedge funds and other investment firms that harvest transactional data for trading signals. Does a financial institution’s fiduciary duty supersede the customer’s agency over whom they share their data with?

Financial institutions have answers to these questions, and they must be transparent with their customers. By communicating clearly what data they collect and possess, why they collect it and what they do with it, and with whom they share it and for what purposes, financial institutions would deepen trust. They would also position themselves as stewards of their customers’ data, a role more likely to be entrusted to them than to big tech.

Ultimately, the financial services sector may develop a blueprint defining the appropriate use of customer data that could serve as an example and be adopted by other industries. To realise this strategic opportunity, the industry should embrace the principles developed by the WEF stakeholders.

Matthew Blake is head of the future of financial and monetary systems and a member of the executive committee at the World Economic Forum.
