
The need for speed

In the fast-moving world of FX trading, speed is of the essence. Delivering the right data to clients at the right time can make the difference between turning a profit and making a loss. William Essex looks at the issues surrounding data latency.

Latency is intrinsic to data. Basic physics dictates that if, say, a trading price has to be moved from a pricing engine to an end-user screen, that movement will take time. “In any electronic system, there are inevitably going to be differences in speed between participants placing quotes on their systems,” says Peter Ho, managing director of IKOS Research.

Does this matter? Potentially, yes. For example, such lags lead to possible arbitrage opportunities in which a trader, or a system-trading hedge fund, spots a discrepancy between two banks’ prices. With tradable streaming data, there is also the opposite risk, in that a trade based on a “dead” price can be cancelled. One banker comments: “If one trade in a basket fails, you end up with a basket case.”
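The mechanics of such a latency arbitrage can be sketched in a few lines: if one bank's feed lags, its stale quote may cross another bank's live quote, letting a trader buy at the laggard's ask and sell at the leader's bid. The bank names and prices below are purely illustrative, not from the article.

```python
# Hypothetical two-bank quote snapshot. bank_b's feed is lagging, so its
# stale ask sits below bank_a's live bid -- a latency-induced crossed market.
quotes = {
    "bank_a": {"bid": 1.2505, "ask": 1.2507},  # up to date
    "bank_b": {"bid": 1.2500, "ask": 1.2502},  # stale ("dead") price
}

def find_arb(quotes):
    """Return (buy_from, buy_price, sell_to, sell_price) if one bank's
    bid exceeds another bank's ask, else None."""
    for sell_to, q_sell in quotes.items():
        for buy_from, q_buy in quotes.items():
            if sell_to != buy_from and q_sell["bid"] > q_buy["ask"]:
                return (buy_from, q_buy["ask"], sell_to, q_sell["bid"])
    return None

print(find_arb(quotes))  # buy from bank_b at 1.2502, sell to bank_a at 1.2505
```

In practice, as Mr Chappell notes below, market-makers spot this behaviour quickly and withdraw or slow the offending price stream.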

Thus, the obvious downside of latency is bad trades. Mark Monahan, head of global sales at bank-owned FX firm EBS, says: “If a latency occurs in a price feed that’s fuelling a bank’s pricing engine, the prices shown to customers come from a market that’s already in the past.” Hence, that arbitrage opportunity.

According to Paul Chappell, managing director of C-View, there have been clear examples of traders attempting to take advantage of latency, especially in a world that is predominantly moving towards electronic execution. It is, however, a problem that can solve itself. “Blatant attempts to take [such] advantage usually lead to the market-makers slowing down or withdrawing the price-delivery mechanisms,” says Mr Chappell.


Paul Fox, chief operations officer, Cognotec: ‘Because the market movement speed is critical in the decision to trade, latency is significant’

Technological shortcomings

Robert Large, head of research at Platinum Capital Management, says the straightforward problem of dead prices is easily compounded by technological shortfalls. “If you have streaming prices, but you’re looking at a price that’s not live, by the time you come to hit it, the price might not be really there,” says Mr Large. “And if the customer’s got a slow connection to the bank, he may be seeing a price that isn’t there.”

You don’t need arbitrage to have a dispute with a client whose data supply has let him down, and contractual disputes benefit no-one.

So in more ways than one, speed is of the essence. With a more traditional business model, for example, there is also a potential disadvantage in being second to respond to a request for a quote (RFQ). Mr Monahan cites his company’s in-house research suggesting that in two-thirds of cases, assuming identical prices, the first price returned to an automated RFQ wins the trade.

Martin Spur, head of e-ventures at RBS Financial Markets, agrees that in the e-commerce environment, data latency can be the difference between making and losing money on transactions. “It’s becoming increasingly important as the take-up of sophisticated pricing engines accelerates and the volume going through them increases,” he says. “The speed and quality of the pricing engine is at least a factor in competitive advantage, and at the extreme, it can make the difference between profit and loss with trading with clients.”

The point is, clients like speed. Mr Spur says: “We are using the technology that we have and the speed of our pricing as a point of competition in the market, and have used it to great advantage in escalating our market share.”


Peter Ho, managing director, IKOS Research: ‘In any electronic system, there will be differences in speed between participants’

Medium-term targets

However, not everybody sees data latency as a problem. Thanos Papasavvas, head of currency at Credit Suisse Asset Management (CSAM), says: “What we tend to do is try and get in for a medium-term trade. If we buy euros at 125, say, our target will be 127, 128, 130 or perhaps 135. If we get in at 125.02 or 124.98, it will come out in the wash, because next time it might work in our favour.” With a medium-term target movement in excess of 300 basis points, nobody is going to lose any sleep over a couple of ticks.

But this is a money-management, “relationship” approach to the FX market, with sufficiently long time horizons to allow discrepancies to come out in the wash. Not everybody trades on that basis. “If you’re in the hedge-fund arbitrage business, in which your target might be a 25bp move by the end of the day, losing a couple of ticks on the trade might be very important,” concedes Mr Papasavvas.
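The arithmetic behind the contrast is simple. Using the figures quoted above, and treating one tick as 0.01 on a price of 125, two ticks of slippage are trivial against a 300-point medium-term target but substantial against a 25bp intraday target:

```python
# Illustrative only: entry and targets are the figures quoted in the
# article; treating one tick as 0.01 of the quoted price is an assumption.
entry = 125.00
slippage = 0.02                       # two ticks lost to latency

medium_term_move = 128.00 - entry     # ~300-point medium-term target
intraday_move = entry * 0.0025        # a 25bp intraday move on the same price

for label, move in [("medium-term", medium_term_move),
                    ("intraday", intraday_move)]:
    cost = slippage / move * 100
    print(f"{label}: slippage is {cost:.1f}% of the target move")
# medium-term: slippage is 0.7% of the target move
# intraday: slippage is 6.4% of the target move
```

The same two ticks that “come out in the wash” for the medium-term manager eat more than 6% of the intraday arbitrageur’s target move.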

Similarly, Paul Caplin, founder and CEO of financial software provider Caplin Systems, speaks of banks getting data directly and delivering it direct to hedge funds to shave a second or two off the latency. “Since the hedge funds are doing programme trading, that second or two can be extremely valuable,” he says.

An electronic issue

Aside from the issue of who benefits and who loses by delays, data latency is a technology issue. And as technology becomes pervasive, so latency becomes increasingly important. One banker, who uses FX to hedge debt issuance, says: “Latency occurs in an electronic environment, which means that it’s in all our futures.” We’re all going electronic, which is changing market behaviour.

Latency is also, by extension, an e-problem. Says Mr Caplin: “FX trading has moved very rapidly onto the web in the past few years. I don’t think there’s a major FX programme that doesn’t use the web as one of its main routes to the market.”

Latency’s component issues are speed and the consistency of that speed. Where technology enables the fast, accurate delivery of data, and in particular executable streaming prices, the opportunity arises to build a business model based on that delivery.

Once such models have been built, issues arise around whether the data is being updated in “real-enough time” for the trading purpose and whether the intervals between updates are near-constant enough to be built into the traders’ assumptions.
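One common way to enforce a “real-enough time” requirement is a staleness guard on each incoming quote: any price older than a tolerance is treated as dead and not traded on. The threshold and quote structure below are hypothetical, for illustration only.

```python
import time
from dataclasses import dataclass

MAX_AGE = 0.5  # seconds; hypothetical staleness tolerance


@dataclass
class Quote:
    price: float
    ts: float  # time the quote was generated, in epoch seconds


def is_tradable(quote, now=None):
    """Reject 'dead' prices: a quote older than MAX_AGE is not hit."""
    if now is None:
        now = time.time()
    return (now - quote.ts) <= MAX_AGE


fresh = Quote(price=1.2502, ts=time.time())
stale = Quote(price=1.2498, ts=time.time() - 2.0)
print(is_tradable(fresh), is_tradable(stale))  # True False
```

The second half of the problem, consistency, is a matter of the intervals between updates staying near-constant enough that a check like this can use a fixed threshold at all.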

Once high-speed trading becomes viable, that high speed becomes a commercial necessity: your trading customer will be watching the market, possibly watching other screens as well, and in the competitive environment of his trading desk, your primary advantage will be fast, reliable delivery.

Paul Fox, chief operations officer of FX systems provider Cognotec, says: “If you’re streaming an executable price that is good up to $5m or $10m, at any given time you can be hit on that price. Because the speed of market movements is much more critical in the decision to trade, latency is now much more significant.”

New protocol required

That’s enough about the problem – what about the solution? Focusing on the web, Mr Caplin says the technology is there, but the thinking behind it is not a million miles from traditional voice brokerage.

“Every major FX brokerage is providing web-based solutions, but the majority are still RFQ interfaces,” he says. “That’s quite slow. You put a request in, you wait for responses, they come back, you think about it; the latency can be important when the market is volatile.”

You might pick the first price that comes back, and it might be dead by the time you do so. “The traditional web portals have been built on generic web technology, which is not time-critical,” says Mr Caplin.

The competitive web solution, he suggests, is to migrate from a client-pull web protocol such as http, which doesn’t do updates unless you press the “refresh” button, to an update-pushing protocol such as Caplin Systems’ rttp (real-time text protocol), which refreshes itself “instantaneously”. Or you can take the simple approach and just clear the line.
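The difference between the two models can be sketched without any network code: a polling client samples the feed at fixed intervals and only sees whatever the latest price happens to be at each poll, while a push client receives every update as it occurs. The tick data below is invented, and RTTP itself (a proprietary protocol) is not modelled.

```python
# Simulated tick stream: (seconds elapsed, price)
ticks = [(0.0, 1.2500), (0.1, 1.2503), (0.2, 1.2499), (1.1, 1.2510)]


def poll(ticks, poll_times):
    """Client-pull (HTTP-style): sample the latest price at each poll time."""
    seen = []
    for t in poll_times:
        current = None
        for ts, price in ticks:
            if ts <= t:
                current = price  # last update at or before the poll
        seen.append(current)
    return seen


def push(ticks):
    """Server-push (RTTP-style): every update is delivered as it occurs."""
    return [price for _, price in ticks]


print(poll(ticks, [0.0, 1.0, 2.0]))  # [1.25, 1.2499, 1.251] -- 1.2503 never seen
print(push(ticks))                   # all four updates delivered
```

The poller misses the intermediate tick entirely and, between polls, may act on a price that has already moved; the pushed client sees every update with no polling interval adding latency on top.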

Explaining the thinking behind the recent launch of EBS Live, Mr Monahan is straightforward. “The shortest distance is a straight line.” EBS Live is therefore a direct XML-based link from EBS’ data engines to the client location that offers “sub-second” latency.

The key point here is the end-to-end direct delivery via a dedicated feed. After all, you don’t even need basic physics to know that if data has to travel down a pipe with a bundle of unrelated analytics and pass through a whole series of nodes on the way, it will arrive at its destination late.
