Emerging technologies | 30 August 2023

Artificial intelligence as a central banker

Will AI always “sink its own battleships” to achieve its aims?
Image: Getty Images

Central banks are rapidly deploying artificial intelligence (AI). The direction of travel seems inevitable, with AI set to take on increasingly important roles in central banking. That, in turn, raises questions about what we can entrust to AI and where humans need to be in charge.

At the risk of oversimplifying, it is helpful to think of the benefits and threats of AI on a continuum. 

At one end, we have problems with well-defined objectives and rules and a finite, known action space, as in a game of chess. Here AI excels, making much better decisions than humans. 

For central banks, this includes ordinary day-to-day operations, monitoring and decisions, such as the enforcement of micro-prudential rules, payment system operation and the monitoring of economic activity. The abundance of data, clear rules and objectives and repeated events make this work ideal for AI. Robo-regulators in charge of regtech are a perfect use of AI. 

Such work is now done by professionals with bachelor's or master's degrees. Central banks may first perceive value in having AI collaborate with humans while keeping staff numbers unchanged. However, as time passes, central banks may grow to embrace the superior decisions and cost savings that come from replacing employees with AI. That is largely possible with today's AI technology.

As the rules blur, objectives become unclear, events grow infrequent and the action space turns fuzzy, AI starts losing its advantage. It has limited information to train on, and important decisions might draw on domains outside of the AI training dataset. 

This includes higher-level economic activity analysis, which may involve PhD-level economists authoring reports and forecasting risk, inflation, and other economic variables. These roles require a comprehensive understanding of data, statistics, programming and – most importantly – economics. 

As the rules blur and objectives become unclear, AI starts losing the advantage

While the skill level for such work is higher than for ordinary activities, a long history of repeated research, coupled with standard analysis frameworks, leaves a significant amount of material on which AI can train. Crucially, such work does not involve much abstract analysis. The senior decision-makers might come to appreciate the faster and more accurate reports produced by AI. This is already happening rapidly, for example with forecasting overseen by ChatGPT and other AI.

In extreme cases, such as when deciding how to respond to financial crises or rapidly rising inflation – events that the typical central banker might only face once in their professional lifetime – the human decision-makers have the advantage. Here they might have to set their own objectives, events are essentially unique, information is extremely scarce, expert advice is contradictory and the action space is unknown. 

In such situations, mistakes can be catastrophic. In the 1980s, an AI called Eurisko used a cute trick to defeat all of its human competitors in a naval wargame: it sank its own slowest ships to gain better manoeuvrability than its human opponents.

And that is the problem with AI. How do we know it will do the right thing? Human admirals do not have to be told they cannot sink their own ships. They just know. The AI engine has to be told. But the world is complex, and creating rules covering every eventuality is impossible. AI will eventually run into cases where it takes critical decisions no human would find acceptable.

Of course, human decision-makers mess up more often than AI, but there are crucial differences. The former come with a lifetime of experience and knowledge of relevant fields, like philosophy, history, politics and ethics. This allows them to react to unforeseen circumstances and make decisions subject to political and ethical standards without it being necessary to spell them out.

While AI may make better decisions than a single human most of the time, it currently has only one representation of the world, whereas each human has an individual worldview shaped by past experience. Group decisions made by decision-makers with diverse points of view can therefore be more robust than those made by an individual AI. No current, nor envisioned, AI technology can make such consensus group decisions.

Furthermore, before putting humans in charge of the most important domains, we can ask them how they would decide in hypothetical scenarios and – crucially – ask them to justify those decisions. They can be held to account and required to testify to the Treasury Select Committee. If they mess up, they can be fired, punished, incarcerated and lose their reputation. None of that can be done to an AI. Nobody knows how it reasons or decides, nor can it explain itself. You can hold the AI engine to account, but it will not care.

The usage of AI is growing so quickly that decision-makers risk being caught off-guard and faced with a fait accompli. ChatGPT and machine learning overseen by AI are already used by junior central bankers for policy work. 

Instead of steering AI adoption before it becomes too widespread, central banks risk being forced to respond to AI that is already in use.

AI will change both central banks and what will be demanded of their employees. While most central bankers may not become AI experts, they likely will need to speak the language of AI – be familiar with it – and be comfortable taking guidance from and managing AI engines.

The most senior decision-makers must then both appreciate how AI advice differs from that produced by human specialists, and shape their human resource policies and organisational structures to allow for the most efficient use of AI without it threatening the mission of the organisation.

Jon Danielsson is a director of the ESRC-funded Systemic Risk Centre at the London School of Economics.

In collaboration with the Centre for Economic Policy Research.
