March 6, 2006

QUANTUM LEAP: Trading Faster Than the Speed of Light

Bank trading desks are continually seeking faster data processing to speed up deals and this is pitting technology developers against the laws of physics. Dan Barnes reports.

In the big money world of trading, patience is definitely not a virtue. Traders who miss deals by milliseconds, or are outwitted by a competitor’s superior data, quickly become history. That is why they demand the best from their IT departments and do not take ‘no’ for an answer.

Increasingly, however, it is not the translation of business needs into reality that traders must contend with, but the laws of physics.

Bank trading desks now demand such huge quantities of data and require that it be processed at such fantastic speeds that in some cases they are rubbing up against current scientific limits.

“However much power we give the quants, they will use it all up,” says Barry Childe, head of central grid systems at Royal Bank of Scotland. “In a previous role, I increased the compute power on a system by 40 times. My boss was impressed and said: ‘Great, when can I get 400 times?’”

Things have become so acute that the physical proximity of the trading floor to the exchange is now an issue. The extra nanoseconds that a remote trade takes to register could cost a bank the deal. The idea that advances in computing and electronics would make trading equally profitable from anywhere in the world has been turned on its head.

Fortunately, the big IT companies are rising to the trading challenge and have already come up with a number of innovations that, if successfully developed, will transform the trading operation. Given that the scientists are battling against the laws of physics, many of their proposals are highly advanced and difficult to grasp.

One of the straightforward ways of increasing processing power is to use more processors; rather than speeding up tasks, more of them can be done at the same time. ‘Multi-threading’ is the process of dividing a computer program into lots of separate tasks to be performed simultaneously as a way of speeding things up. By running these tasks across different computer processors – for example across a multi-core chip (see definitions) – significant advances can be gained, especially when many processors are included in a single chip. “Multi-threading and multi-core are the best ways to get more juice out of your applications,” says Richard Baker, European marketing manager at chip manufacturer AMD.
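
As a rough illustration of the principle – not any bank’s production code – the short Python sketch below splits an invented, CPU-hungry ‘pricing’ calculation into independent tasks and farms them out across several cores (in Python, that means processes rather than threads for number-crunching work):

# Minimal sketch: splitting one job into independent tasks and running them
# across several cores with a process pool. The price_instrument workload is
# an invented stand-in for any CPU-bound calculation a trading desk might run.
import math
from concurrent.futures import ProcessPoolExecutor

def price_instrument(seed: int) -> float:
    # Dummy CPU-bound work standing in for a pricing or valuation routine.
    total = 0.0
    for i in range(1, 200_000):
        total += math.sin(seed * i) / i
    return total

def run_parallel(seeds, workers=4):
    # Each task is independent, so they can be farmed out to separate cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(price_instrument, seeds))

if __name__ == "__main__":
    results = run_parallel(range(8), workers=4)
    print(f"priced {len(results)} instruments")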

New avenues of power

More power can also be found by looking to other industries. In cryptography and telecommunications, chips whose circuitry can be reconfigured to implement a particular program directly in hardware have been used for years, yet banks are only just beginning to look at them. Known as field programmable gate arrays (FPGAs), they can increase processing power by up to 1000 times for a dedicated application. Because they are configured to specialise in a single task, they avoid the time and power that a general-purpose chip spends working through a program’s logic. That banks are only now adopting a technology that has been in commercial use for more than 15 years says much about the pressure in the industry.

At HP Labs, scientists are working on the problem of making chips with more (and therefore smaller) transistors so that processing power can keep increasing. In trying to create smaller transistors, they are running up against molecular barriers and discovering serious shortcomings in silicon – for many years the standard material for chips – for this purpose. HP’s new-generation chips, which promise to increase processing power dramatically, are still at the laboratory stage, but there are plans for mass manufacturing.

“At the moment, they are lovingly made by sharing and caring PhDs,” jokes Phil Kuekes, a member of the technical staff at HP Labs. “But they could be manufactured on an industrial scale and form the processor for trading platforms of the future.”

One of the most astonishing innovations is the use of ‘photonic crystals’ to transfer data using pulses of light. A chip built at the University of Texas by Ray Chen, a professor of electrical engineering, exploits a method of manipulating light to make it an effective way of transferring information – think Morse code sent with a torch compared with writing and posting a letter. Bursts of light at differing levels are potentially extremely quick, low power and almost heatless compared with pushing electrons around metal circuits. Like the HP solution, this is at the laboratory stage.

The speed barrier

How did trading arrive at this situation where IT labs are having to work overtime to keep the systems on track?

At one time, Intel co-founder Gordon Moore’s prediction that processing power would double every 18 to 24 months was something that the industry took in its stride. Despite many predictions of its imminent demise, it held true, earning the epithet ‘Moore’s Law’. Last April, on the 40th anniversary of Moore’s Law, Dr Moore told a press conference that within a couple of generations of chips their construction would have to be dealt with at the atomic level, and that this is a “fundamental barrier” to exponential growth. However, the human reaction to any law is to find a way around it – especially if it stands in the way of a tidy profit.

Gordon Moore: Predicted that the number of transistors on a chip would double every 24 months and that chip construction would have to be done at the atomic level for growth in processing power to continue

The financial services industry’s response to a doubling of processing power every two years has been to use it all and ask for more. To gain a competitive edge over other traders, investment banks and funds have sought to create increasingly complex and clever computer programs that can outwit one another and any humans who care to trade against them. Evolving the smartest algorithms takes a huge amount of testing, risk assessment and therefore number crunching. Putting a couple of thousand algorithms through 100 years of historical data to assess their individual suitability to go to market is not a task for a desktop PC.
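
To get a feel for why this is such heavy number crunching, the illustrative Python sketch below scores a set of toy moving-average trading rules against a synthetic price history. Everything in it – the data, the rule and the parameters – is invented, and it is scaled well down from the thousands of algorithms and decades of data described above:

# Illustrative only: scoring many candidate trading rules against a price
# history. A real desk would test far richer models over far more data;
# the point is the sheer volume of computation involved.
import itertools
import random

def simple_backtest(prices, fast, slow):
    # Profit of a toy moving-average crossover rule (fast vs slow window).
    cash, position = 0.0, 0
    for t in range(slow, len(prices)):
        fast_avg = sum(prices[t - fast:t]) / fast
        slow_avg = sum(prices[t - slow:t]) / slow
        if fast_avg > slow_avg and position == 0:    # buy signal
            position, cash = 1, cash - prices[t]
        elif fast_avg < slow_avg and position == 1:  # sell signal
            position, cash = 0, cash + prices[t]
    return cash + position * prices[-1]

# Synthetic price history (a random walk) standing in for market data.
random.seed(1)
prices = [100.0]
for _ in range(2_000):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.001)))

# Every parameter combination needs a full pass over the history -- this is
# where the compute bill comes from.
candidates = list(itertools.product(range(5, 30, 2), range(40, 100, 5)))
best = max(candidates, key=lambda c: simple_backtest(prices, *c))
print("best (fast, slow) windows:", best)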

All of this activity takes place before the algorithms even hit the market. Once they go into action, with all of their competitors generating huge numbers of relatively small trades, they add processing burden not only to the banks that run them but also to exchanges, data providers and other market participants, as Robin Paine, chief technology officer at the London Stock Exchange, notes.

Robin Paine: Algorithmic trading systems are driving growth in trading

“Over the past five years, we have seen 44% compound annual growth in the number of trades per day on SETS [the computerised stock exchange trading system]. One of the driving factors for that growth has been the proliferation of algorithmic trading systems.”

The complexity equation

If a product is too complex to be traded via automated systems, a huge amount of compute power is put into its very creation. Exotic products can be incredibly profitable and the race to get them to market ahead of the competition keeps the back office wheels spinning night and day. “You are looking at some pretty ‘bulky’ applications to get exotic products fit for market,” says Mr Childe. “It used to be the case that structured products were put together by the Tier 1s and the big credit shops but, as vendor platforms are giving greater capacity to Tier 2 and Tier 3 banks, there is a lot more adoption of advanced structured products than there was a few years ago.” With this competition snapping at the heels of the pack leaders, there has been no let-up in pace.

The infrastructure necessary to support these processing requirements consists, in hardware terms, of two parts. First, all of the information has to be processed quickly and accurately. Second, the information has to be passed between market players as rapidly and reliably as possible. The first part is, in effect, the engine of an investment bank: the direct results of accurate and reliable processing are successful products and strategies. These do not exist in a vacuum but must be judged against those produced by the competition, which is why the speed and volume of number crunching is crucial. In the words of one investment bank’s chief information officer: “Scale, volume and performance are differentiators – some people do it better than others.”

Although often referred to as ‘keeping the lights on’, implying a slightly mundane activity, losing this capacity can be incredibly damaging, as the Tokyo Stock Exchange found out in January when its trading system (installed just over a year earlier) hit its capacity limit before the end of the day and the exchange was forced to halt trading.

How can this pace be maintained if Moore’s Law cannot be sustained? Looking at the law in more detail, Dr Moore’s actual prediction was that the number of transistors on a chip would double every 24 months. Inherent in this was the development of smaller transistors, so that twice as many could be fitted onto a chip. Because the transistor count is a basic measure of computing power, the extrapolation from his original statement has been that computing power would double every two years – later revised to 18 months.
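
The difference between the two readings compounds quickly, as the back-of-envelope Python calculation below shows (an illustration of the arithmetic only, not a forecast of actual chip performance):

# Back-of-envelope arithmetic: how the two common readings of Moore's Law
# compound over time. Growth factor = 2 ** (years / doubling period).
def growth_factor(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

for years in (2, 5, 10):
    every_24_months = growth_factor(years, 2.0)
    every_18_months = growth_factor(years, 1.5)
    print(f"{years:>2} years: x{every_24_months:6.1f} (24-month doubling), "
          f"x{every_18_months:6.1f} (18-month doubling)")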

Herein lies the problem. Physically, an object cannot be scaled down indefinitely because at some stage it hits the molecular level. This has put pressure on the research and development teams working to increase the number of transistors on a chip and so sustain the growth of computing power.

The multi-threading solution

The most straightforward solution has been increasing the number of processors used – the equivalent of adding extra cylinders to an engine to make it more powerful. If a central processing unit (CPU) is the engine of a computer then dual-core and multi-core chips are engines with extra cylinders. By using a single interface to the data and memory that are needed to run a computation but then dividing a program into parts that are run simultaneously across the processing cores (referred to as multi-threading), much faster processing times can be achieved.

Giving a generic figure for the performance improvement this can provide is hard, says Mr Baker, because “the benefits are very application specific”. However, RBS’s Mr Childe notes that while earlier approaches, which divided a program and ran typical floating-point operations (a way of representing real numbers) across a single hyper-threaded chip, improved processing by only 11% to 12%, “we’ve seen 80% to 90% improvement with dual core”.
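
The figures above come from the banks’ own workloads, but the shape of such a measurement is easy to sketch: time the same floating-point job on one worker and then on two, and compare. The toy Python benchmark below does just that; it is not a reproduction of RBS’s tests, and the numbers it prints will depend entirely on the machine it runs on:

# Toy benchmark: compare one worker against two for a CPU-bound
# floating-point job, the kind of comparison behind the dual-core figures
# quoted above. Illustrative only; results vary with the hardware.
import math
import time
from concurrent.futures import ProcessPoolExecutor

def fp_kernel(n: int) -> float:
    # Pure floating-point work: repeated multiply, add and square root.
    acc = 0.0
    for i in range(1, n):
        acc += math.sqrt(i) * 1.0000001
    return acc

def timed_run(workers: int, chunks: int = 4, n: int = 2_000_000) -> float:
    # Run the same set of chunks with a given number of worker processes.
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fp_kernel, [n] * chunks))
    return time.perf_counter() - start

if __name__ == "__main__":
    t1 = timed_run(workers=1)
    t2 = timed_run(workers=2)
    print(f"1 worker: {t1:.2f}s, 2 workers: {t2:.2f}s, speed-up x{t1 / t2:.2f}")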

Software is being tailored to make the best use of dual-core chips; operating systems such as Windows and Linux, for example, are being developed with them in mind. However, this still does not exploit their full potential, says Mr Baker. “The overheads on these programs are still high. They have a lot of functionality that has built up over their years of development and that is not needed by everyone.” As software improves, dual-core has the potential to provide increased processing power for many years to come.

DEFINITIONS:

  • Application logic: outlines the way in which an application runs
  • Chip: a microprocessor consisting of miniaturised transistors on an integrated circuit. More than one processor on a single circuit is referred to as a dual-core or multi-core chip
  • Transistor: a ‘switch’ that is either on or off

PART TWO (SOFTWARE SPEED)
