Grid technology underlies many of the utility models of computing mooted to revolutionise industry. It is being put to use in financial services organisations and, in some cases, is already providing compute power as a utility. Dan Barnes reports.

In the race to come up with the latest collateralised debt obligation (CDO) structure or another kind of derivative, the secret weapon could be grid computing. Banks that reap first-mover advantage in derivatives can earn million-dollar profits, but they are often held back by a lack of raw computing power. Grid computing harnesses unused capacity in a bank’s PCs and servers to get the calculations done, rather than relying on existing mainframes or supercomputers. It consolidates computing resources and allows them to be co-ordinated in a measured way.

“Where you are making a couple of hundred basis points margin on hundred million dollar deals and you are bringing that to market a couple of months early, the technology cost is meaningless,” says one technology provider.

Despite this, many banks are resisting change because of their reluctance to ditch mainframes that have served them for decades. In the retail sector, where grid computing can also bring huge benefits to operations such as running ATM networks, the uptake is even slower.

The outcome could be that emerging market retail banks that are unencumbered by legacy systems will be among the first players to take advantage of grid computing. Another solution is to buy in computing power from an external provider – the utility model. However, bankers complain that some vendors are not able to provide the service they need in this area.

Forging ahead

One bank that is forging ahead with grid computing is Bank of America (BoA). Like most banks, BoA has used the distributed computing model for a number of years. Andy Sturrock, grid architect at Bank of America, explains: “An overnight batch may involve valuing a portfolio of deals. Typically that is split into individual deals with each deal being sent as a discrete calculation unit. Some deals may be reduced further – for example in Monte Carlo valuations. In this way, the entire computation is distributed over multiple compute engines.”
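By way of illustration, the sketch below uses a local Python process pool to stand in for a farm of grid compute engines: each deal is priced as a discrete calculation unit with a toy Monte Carlo loop inside. The deal structure, the value_deal function and all of the numbers are assumptions invented for the example, not Bank of America’s actual system.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def value_deal(deal, n_paths=10_000):
    """Monte Carlo valuation of a single deal: one discrete calculation unit.
    On a real grid this unit could itself be split further across engines."""
    rng = random.Random(deal["id"])              # deterministic per deal for the sketch
    notional, vol = deal["notional"], deal["vol"]
    total = 0.0
    for _ in range(n_paths):
        # toy payoff: notional scaled by a floored normal shock
        total += notional * max(0.0, 1.0 + rng.gauss(0.0, vol))
    return deal["id"], total / n_paths

if __name__ == "__main__":
    portfolio = [{"id": i, "notional": 1_000_000, "vol": 0.2} for i in range(100)]
    # Each deal is farmed out as an independent task; a grid scheduler would do the
    # same across hundreds of machines instead of local CPU cores.
    with ProcessPoolExecutor() as engines:
        values = dict(engines.map(value_deal, portfolio))
    print(f"portfolio value: {sum(values.values()):,.0f}")
```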

This simple distributed computing was the forerunner of grid computing, he says. He warns that there is a degree of over-hyping going on in the IT industry and emphasises that grid technology is just “an evolution of distributed computing. The main added-value factor for BoA is the dynamic reprovisioning of resources and sharing those resources between applications. This enables both hardware consolidation and cross-business line co-ordination”.

In with the new

As a potential usurper to the mainframe or supercomputer, grid technology is grabbing the industry’s attention. Dave Hanley, industry leader, financial services at Oracle UK, believes it would be “trite” to say that grid could take the place of a mainframe. Grid “can turn your 64 PCs into a giant mainframe” and there is “clearly an opportunity for banks to look at moving off those platforms”, he says, but the 20, 30 or even 40 years of heritage that some banks have with mainframes makes this development unlikely for established big players.

However, in emerging markets the opportunity is more viable, he suggests. “In emerging parts of Europe, they are refreshing their infrastructures, which is the right thing to do because they did not really have an infrastructure in the first place. It is going to be interesting to see what they can do from a banking platform viewpoint,” says Mr Hanley.

Handling complexity

One company that does believe the mainframe can be replaced by a grid-based environment is Paremus. Its product, Infiniflow, is an enterprise software solution that is designed to fuse conventional grid computing with messaging middleware and business workflows. According to Dr Richard Nicholson, chief technology officer at Paremus: “We can do complex transactional behaviours, workflow, back office settlement type behaviours. It is a mechanism by which people could remove mainframes and distribute those sort of systems onto a utilitised compute fabric.”

The technology is cost efficient. The central processing unit (CPU) power of low-cost servers or PCs (referred to as “nodes” or “engines”) can be leveraged across the globe, removing the need for proprietary hardware to support a multitude of systems. BoA, for example, has several systems that are sharing computing resources across the Atlantic.

Failure risks reduced

Another advantage is that the risk of failure shifts from the “catastrophic” model to the “degradation” model: a mainframe can fail completely, but if a CPU fails in a cluster, the cluster manager replaces it with an operational CPU while the other nodes continue to run with no loss of service.
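A minimal sketch of that degradation model, assuming a toy scheduler and made-up node names: when an engine fails, its task is simply re-queued on a surviving engine and the batch completes at reduced capacity rather than failing outright.

```python
import random

FAILED_NODES = {"node-3"}                 # simulate one dead CPU

def run_on_node(node, task):
    """Pretend to run a task on a grid node; a failed node raises an error."""
    if node in FAILED_NODES:
        raise RuntimeError(f"{node} is down")
    return f"{task} completed on {node}"

nodes = [f"node-{i}" for i in range(1, 5)]
tasks = [f"valuation-{i}" for i in range(8)]

results, queue = [], list(tasks)
while queue:
    task = queue.pop(0)
    node = random.choice(nodes)
    try:
        results.append(run_on_node(node, task))
    except RuntimeError:
        nodes.remove(node)                # the cluster manager drops the node...
        queue.append(task)                # ...and reschedules the task elsewhere
print(f"{len(results)} tasks finished on {len(nodes)} surviving nodes")
```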

An analogy would be that the loss of a single brain cell does not affect a person’s consciousness. However, grid technology is not without issues of concern.

For a start, it is in an early stage of use at the organisations that have adopted it. Grid is an obvious choice where a problem is “embarrassingly parallel”, says Mr Sturrock. “To value a portfolio of trades, there is a natural division of the work: split the portfolio into individual deals and calculate the value of each as a discrete computational unit. Where calculations move through various interdependent states, such as back office, settlement or reconciliation processes, they have the potential to be deployed on grid but that potential has not yet been fully realised.”

At BoA, grid computing has been used for the processes that fit naturally, and the process of applying it to tasks that are more complex to parallelise has begun.
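The contrast Mr Sturrock draws can be sketched as follows, with hypothetical stage names rather than any real settlement system: each stage consumes the previous stage’s output, so the stages of one trade must run sequentially and parallelism has to be recovered across trades instead.

```python
from concurrent.futures import ThreadPoolExecutor

STAGES = ["capture", "enrich", "settle", "reconcile"]   # must run in this order

def process_trade(trade_id):
    """Stages of one trade run sequentially; only whole trades run in parallel."""
    result = f"trade-{trade_id}"
    for stage in STAGES:
        result = f"{stage}({result})"    # each stage wraps the previous result
    return trade_id, result

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=8) as grid:
        for trade_id, final in grid.map(process_trade, range(5)):
            print(trade_id, final)       # e.g. reconcile(settle(enrich(capture(trade-0))))
```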

Another drawback is that grid can produce bottlenecks in collecting data, points out Bob Boettcher, vice-president, financial services at Platform Computing. “In the pre-grid world, you had a three-tier environment. You had a client application that communicated with the calculation engine, and the calculation application needed to get its inputs from the database – things like trade data, market data and scenario data. But with grid technology, instead of one calculation engine, you can literally have a thousand calculation engines and you can parallelise the calculation across all of them. When you think that all of those calculation engines need to get data from the database, the database can become a bottleneck,” Mr Boettcher says.

“One solution available on the systems infrastructure side is called Infiniband,” he says. “This is a high-speed switching fabric, providing low-latency connections between different CPUs in your grid, and also between CPUs and data sources. So the technology can stream data very quickly from the database into the application, the calculation engine, and in some cases that can help to alleviate the bottleneck.

“Another solution is to use data caching – something built into Platform’s Symphony product,” says Mr Boettcher. “Caches provide access to data that you would otherwise have to get directly from the database. Basically, you move commonly used data to the calculation engine. So when it needs to make a calculation, it doesn’t have to go back to the database.”
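The caching idea can be sketched in a few lines. This is an illustration of the general technique rather than Platform’s Symphony API; the load_from_db function and the data key are invented for the example. Commonly used reference data is held on the calculation engine, so repeated calculations avoid further round trips to the central database.

```python
import time

_cache = {}                                # engine-local cache of reference data

def load_from_db(key):
    """Stand-in for a round trip to the central database."""
    time.sleep(0.05)
    return {"key": key, "loaded_at": time.time()}

def get_market_data(key):
    # Hit the engine-local cache first; fall back to the database on a miss.
    if key not in _cache:
        _cache[key] = load_from_db(key)
    return _cache[key]

if __name__ == "__main__":
    start = time.time()
    for _ in range(1_000):                 # a thousand pricing calls...
        get_market_data("EURUSD_curve")    # ...but only one database round trip
    print(f"elapsed: {time.time() - start:.2f}s")
```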

As the Tier 1 banks make use of the model, Tier 2 banks may find it harder to achieve the same level of deployment. One technology architect from a Tier 1 bank says that the concept will prove attractive to Tier 2s because of the cost benefits, but they do not have the necessary level of resources. “We spent a lot of time getting to our current situation – it has taken 18 months to get to this point and a lot of Tier 2 banks do not have the resources to achieve that. I believe they will not be developing the systems internally but are more likely to buy off-the-shelf packages.”

This is supported by research from the Institute of Financial Services, Morse, HP and Oracle, published in a report, Adapting to Change.

The payback

In the meantime, those using the grid model are seeing the returns already. “You free up the development resource from building the distribution logic and can focus on the application logic,” says Willy Ross, managing director, EMEA with technology provider Data Synapse. “We have found that some of our customers are telling us they can then improve the time to market of some products by one third. There is cost saving and people cost but the more important thing is the means of bringing these new products to market. With a typical six-month development cycle, you can bring them in two months early.”

In the case of new exotics, he says, this can make a great difference to profits. “If you are inventing new exotics, CDOs to CDO squared to CDO cubed or another extension on that, where you’re making a couple of hundred basis points margin on hundred million dollar deals and you’re bringing that to market a couple of months early, the technology cost is meaningless.”

While cost savings through efficient use of resources are clearly a major driver for adopting grid technology, companies that have already achieved this are seeing further benefits. “We’re really seeing a shift to look more at the top line, rather than looking at the bottom line,” says Mr Ross. “Different banks are at different stages of the thought process. The bigger guys are looking at what it means to bring a product to market early; they can go further up the yield curve, do riskier things, with more complex structures, higher margins. That is what is driving the market.”

Utility – right here, right now

So grid technology is out there. It has been deployed and it is in use, although at a somewhat limited level. How far off is the utility model? JP Rangaswami, global chief information officer at Dresdner Kleinwort Wasserstein, says: “Utility grids are not a case of technology vendors trying to sell us old wine in a new bottle. There is a real need to provide certain computing facilities on demand, in a form that is a true utility model.”

For all the hype about on-demand and utility, and a clear customer requirement, it seems that finding a vendor prepared to deliver can be challenging. BoA will use a data centre owned by an IT services provider. “We have found that, despite vendors hyping on-demand principles, they are unwilling to move from the traditional outsourcing model,” says Mr Sturrock.

“We talked to several vendors and found only one was capable of partnering with us to pursue the utility model for grid. We will hire hundreds of CPUs in a remote data centre owned by a third party for a proportion of the 24-hour day. We will see the CPUs for our time-slot and the machines will be disconnected from us for the remaining time. The vendor can then sell the additional hours to another institution. For example, it could go to a pharmaceutical or manufacturing company. They may have a massive amount of calculation to be done over a long period but they do not care what time of day it gets done. I don’t think anyone else is doing this on an intra-day basis.”

Why is the adoption of grid technology so low, then? According to Mr Sturrock, banks are rightly concerned about data security. BoA has gone to significant lengths to ensure an efficient but highly secure infrastructure. However, the benefits are clear: sharing corporate resources across banks and other industries results in multi-million dollar cost savings.
