Crunch time for data centres

The amount of data that financial institutions need to store is constantly increasing, with knock-on effects on data centre capacity, energy use and running costs. Michelle Price reports.

The data centre is the beating heart of all large, modern corporate IT infrastructures. Home to increasingly powerful hardware, it is not only the place in which billions of transactions are processed daily, but also the organisation’s memory, where vast banks of vital data are amassed and stored. As such, this often invisible facility provides the information resource pool and power by which the entire IT estate – and indeed the wider business – is able to operate.

Nonetheless, data centres – dusty, dirty and unglamorous – have traditionally sat at the bottom of the IT value chain, says Steve Wallage, managing director of specialist data centre research house BroadGroup. “It’s one of these things that gets stuck between the IT, building facilities and management departments. All IT departments are reliant on it, but no-one’s quite sure what goes on down there.”

This problem is particularly pronounced in banking, he says. Because banks tend to operate several, often strongly independent, IT departments across the business, there is rarely a central point at which IT confronts the data centre itself. Historically, IT has demanded hardware with little consideration for its impact lower down, at the data centre level, says Dr Bhaskar Dasgupta, head of strategy and change, global infrastructure, at ABN AMRO.

During the past 18 months, however, many banks have had to confront serious problems in their data centre environments. In a recent survey of financial services firms conducted by BroadGroup, 24% of respondents said that their data centres were full. This finding highlights wider problems facing banks’ IT estates: the vast majority of data centres operated by the banking community are too small, too few or no longer fit for purpose. If the data centre is not quite at crisis point – and many commentators have not shied away from describing it as such – it is certainly “a major pain point”, says Mr Wallage.

Storage problems

Addressing the problem is highly complex and costly. Banks have uniquely demanding storage requirements. In recent years, this problem has been compounded by the massive increase in market data that, under regulatory compliance rules, banks are required to store for several years. The Markets in Financial Instruments Directive (MiFID), for example, requires certain data to be kept for a minimum of five years. During the past 10 years, the volumes that must be stored have grown at a staggering rate. In 1998, for example, the New York Stock Exchange logged an average of five million tick data records a day; this figure is now nearer 500 million. Industry experts suggest that it could hit one billion before the year is out.
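
To put those figures in perspective, a rough back-of-envelope calculation shows what a 100-fold rise in daily tick records implies. The sketch below assumes a 10-year window, 252 trading days a year and a hypothetical 50 bytes per record – illustrative assumptions, not figures from the article.

```python
# Back-of-envelope: implied growth rate and storage for daily tick data.
# The per-record size and trading-day count are illustrative assumptions.

records_1998 = 5_000_000        # tick records per day, 1998
records_2008 = 500_000_000      # tick records per day, a decade later
years = 10

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (records_2008 / records_1998) ** (1 / years) - 1
print(f"Implied compound growth: {cagr:.1%} per year")   # ~58.5% a year

# Storage implied by MiFID's five-year retention window,
# assuming a (hypothetical) 50 bytes per tick record.
bytes_per_record = 50
trading_days_per_year = 252
five_year_bytes = records_2008 * bytes_per_record * trading_days_per_year * 5
print(f"Five-year archive at today's rate: {five_year_bytes / 1e12:.1f} TB")
```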

In Europe, the problem is likely to accelerate as market participants respond to the new opportunities represented by MiFID, which has already prompted the emergence of a number of multilateral trading platforms. This trend will inevitably cause market data volumes to escalate further, says Yann L’Huillier, chief technology officer at Turquoise, the London-based multilateral trading platform that is due to go live this September. “The storage problem depends on how many people are going to trade on MiFID: the volume of data is not going to increase because there are more ways to trade, but because the size of the trades will reduce,” he says.

The associated trade data, therefore, will increase by necessity. “The more fragmentation of the market, the more data to store,” says Mr L’Huillier.

For competitive reasons, the front office will often replicate this data and cleanse the second version to support sophisticated event processing, analytics and risk modelling. This effectively doubles the load and increases the capacity and processing power required. At one bank that Mr Wallage knows of, up to 40% of the local data centre was occupied by just two traders running Monte Carlo simulations.

Added pressures

The increasing burden on the front office is only one of several convergent forces adding pressure on data centres. Elaine Heyworth, head of environmental management for Barclays retail and commercial banking, has been tasked with addressing the bank’s data centre challenges. She observes that the growing global scope of the bank – much like that of the majority of its peers – has placed significant demands on storage in recent years. Not only does every single one of Barclays’ 130,000 members of staff operate a PC, she says, but the demands of travel mean that a large number of employees also have laptops and BlackBerry devices. The increasing complexity of IT networks necessarily entails a high volume of duplicated data. “The storage facilities on the servers have to keep expanding,” says Ms Heyworth.

In many instances, banks try to build or refit data centres that will support the organisation’s needs for a minimum of two years, she says, but “suddenly, you realise when it’s up and running that it will only last for one”.

Ms Heyworth says that her team’s biggest challenge is to ensure that the new data centre under construction on Barclays’ site in Gloucester “will not be over-capacity within six months of putting it up”. She is not alone in this challenge. In BroadGroup’s recent survey of financial services organisations, 23% of respondents said that their data centres would be full within a year, and a further 34% said they would be full within the next two years.

Power problems

Issues of capacity are compounding several other, equally if not more pressing, concerns, not the least of which is power. Electricity consumption is the single biggest cost of ownership when operating a data centre, and business demand for high-power computing capability has driven the adoption of ever more energy-intensive equipment.

In particular, the banking industry’s widespread adoption of blade servers – self-contained computer servers designed for high-density computing – has not helped matters. Blade servers consume about 10 times the amount of electricity used by normal servers, says Aydin Kurt-Eli, CEO of Lumison, a company that offers outsourced data centre facilities.

John Killey has observed this development closely. Head of Citi realty services for EMEA, Mr Killey is responsible for ensuring that the infrastructural integrity of Citi’s data centres can sustain the IT equipment housed inside. “Recently, we’ve found that there have been changes in our technology. Blade servers have come along and now we have 13 kilowatt (kW) cabinet loads, whereas five years ago they were all 5kW loads,” he says.
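
A simple sketch shows what that shift in cabinet density means in running costs. The 5kW and 13kW loads are the figures Mr Killey cites; the electricity price and round-the-clock utilisation are illustrative assumptions.

```python
# Rough annual electricity cost of one cabinet, before cooling overhead.
# The £0.10/kWh price is an assumption, not a figure from the article.

def annual_cost_gbp(load_kw: float, price_per_kwh: float = 0.10) -> float:
    """Cost of running a cabinet around the clock for a year."""
    hours_per_year = 24 * 365
    return load_kw * hours_per_year * price_per_kwh

for load_kw in (5, 13):
    print(f"{load_kw:>2} kW cabinet: ~£{annual_cost_gbp(load_kw):,.0f} a year")
# 5 kW -> ~£4,380; 13 kW -> ~£11,388 -- and cooling is not yet included.
```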

Last year, the bank’s electricity consumption in its data centres increased by 12.5%, reports Mr Killey. Like many banks, Citi is “getting to the point now where all our systems designs are at the limit of their ability to support the technology in them”, he says.

Feeling the heat

The problem does not end there. The rapacious electricity consumption of ever more powerful chips and central processing units, coupled with the kinetic activity of certain key components – in particular rotating disks – also generates intense heat. Blade servers generate 10 times the heat of normal servers: the net result is effectively an oven, in which the bank’s physical assets can be baked at temperatures in excess of 40°C.

The 451 Group, an analyst house with a specialist division devoted to IT energy-saving, tells a salient story: in previous summers, the heat inside Lehman Brothers’ data centres has reached such a peak that interns have had to be employed just to hose down the facility.

The costs of cooling

The most perverse twist in the story of heat generation, however, is that the cooling measures taken to avoid overheating – cooling towers and chillers – consume by far the most electricity of all the hardware in the data centre. Overall, the total amount of power consumed – even on a relative scale – is huge: in 2007, Lehman Brothers estimated that its four New York-based data centres were consuming the equivalent of 5% of the output of a nuclear power plant, reports The 451 Group. In aggregate, The Green Grid, a global consortium devoted to energy-efficient IT (its members include Dell, HP, IBM, Intel and Sun Microsystems), reports that data centres consumed 1.5% of the entire US electricity supply in 2006.
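
One way the industry expresses this overhead is power usage effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment. The sketch below assumes a PUE of 2.0 and a 5MW IT load – plausible for the period, but illustrative assumptions rather than figures from the article.

```python
# Sketch of cooling overhead via power usage effectiveness (PUE).
# PUE = total facility power / IT equipment power. At a PUE of 2.0,
# every watt of computing needs a second watt of cooling, power
# distribution and other overhead. Both inputs here are assumptions.

it_load_mw = 5.0   # hypothetical IT load of a large data centre
pue = 2.0

total_draw_mw = it_load_mw * pue
overhead_mw = total_draw_mw - it_load_mw
print(f"IT load:    {it_load_mw:.1f} MW")
print(f"Total draw: {total_draw_mw:.1f} MW at PUE {pue}")
print(f"Overhead:   {overhead_mw:.1f} MW "
      f"({overhead_mw / total_draw_mw:.0%} of the bill)")
```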

To say nothing of the detrimental impact on the environment – and on the banking industry’s corporate social responsibility agenda – this energy consumption is beginning to hurt on costs alone, and fossil-fuel energy prices are rising steeply. When SunGard Availability Services, which runs and rents out data centre space to many financial services firms, renegotiated its electricity contract about a year ago, it was hit with a 47% price rise practically overnight, reports chief strategy officer Dave Gilpin.

Energy costs in the corporate space are expected to rise by about 10% a year in the coming years – although many firms are already reporting a 20% rise year on year. “I don’t know what my next contract is going to be, but I don’t imagine it’s going to be very pleasant,” says Mr Gilpin.
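
For illustration, steady rises at those two rates compound quickly. The five-year horizon in the sketch below is an assumption; the article gives no specific timeframe.

```python
# How 10% vs 20% annual energy-price rises compound (illustrative only).
base_index = 100.0  # index of today's energy bill

for rate in (0.10, 0.20):
    index_after_5y = base_index * (1 + rate) ** 5
    print(f"{rate:.0%} a year -> index {index_after_5y:.0f} after five years")
# 10% -> ~161; 20% -> ~249: at the higher rate the bill well over doubles.
```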

It cannot, however, be taken for granted that the banks will be able to source the power at all. Traditionally, telecommunications infrastructure has been the so-called deal-breaker when it comes to choosing data centre locations. But power is now the primary concern. In London’s financial outpost, Canary Wharf, for example, experts report that the local grid is running out of power. Meanwhile, the local energy supplier EDF – in order to support the nearby 2012 Olympic Games – has plans to cap the amount of electricity that banks will be able to take from the grid.

Such restrictions are not just a London phenomenon. Power providers in the US are now negotiating contracts with local banks that require the banks to switch to back-up, on-site generator power at peak times. Con Edison in New York and Charlotte-based Duke Energy are two such suppliers. The latter reportedly asked Wachovia Bank to switch to generator power several times last summer.

In addition to power problems, the issue of latency – that is, the time it takes to execute electronic trade orders – and the high reputational and financial risk of operational failure make locating and securing new, appropriate data centre space a significant challenge for the banking industry. This was again borne out in BroadGroup’s survey of financial services companies, 64% of which said they found it “very difficult” to secure data centre space and associated power.

Expensive solutions

For many of the world’s largest banks, the answer to these challenges is necessarily expensive. Building out a new data centre can cost up to $500m in capital expenditure – no small outlay, particularly in cash-strapped times such as these. The components found in data centres, for example air-conditioning units or onsite generators, are also rising in price, says Mr Gilpin.

Nonetheless, BroadGroup’s Mr Wallage believes that the data centre – as vital as it is to all operations – is generally immune to budget cuts. “The data centre is such a painful issue now, they have no choice,” he says.

Evidence of this seems to be emerging: many top-tier banks are now undertaking massive projects to transform their data centre operations, from the physical fabric of the building all the way down to chip level. Citigroup is a case in point. Mr Killey is overseeing a major project in Europe to transform the bank’s data centre operations, with a specific focus on building high levels of sustainability into the infrastructure. “Our whole focus has been on rationalising our data centres, on consolidating them so that we can gain economies of scale,” he says.

To some extent, Citigroup’s focus has been on building new data centres rather than retrofitting existing facilities, an approach that is generally more effective when deploying the most up-to-date, efficient practices and hardware. Mr Killey’s strategy when doing so is to build as much flexibility as possible into how the infrastructure’s components are arranged in relation to one another, which makes it easier to identify opportunities for reuse.

“It’s difficult. Sometimes you get it right and sometimes you get it wrong and, unless you have a very good liaison with the technology teams, data centre builders can get it badly wrong,” he says. “But we’re lucky at Citi: we have those good relations and we work with the IT guys to understand what impact their technology will have on the infrastructure.”

Infrastructure constraints

Like Citigroup, Barclays is now building a new data centre on a site that it owns in Gloucester. The centre will be the third on the site, and one of more than 70 belonging to the bank worldwide. This situation is not wholly ideal, Ms Heyworth concedes. “It has its own constraints. We are tied to the infrastructure that exists on that particular plot of land,” she says.

Nonetheless, the bank has been able to invest in some state-of-the-art technologies to prevent some of the problems outlined above. Most notably, these include Hewlett-Packard’s Dynamic Smart Cooling, which can dramatically reduce energy use in higher-density data centres. The system uses sensors to direct cooling to the servers that are working the hardest – which is particularly important for a data centre that, being newly built, is not yet full. “It’s the server rack that it’s cooling, so you’re not doubling up, cooling all that empty space,” says Ms Heyworth.
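
The principle is straightforward: read rack-level temperatures and direct cooling capacity at the hottest spots rather than chilling the whole hall uniformly. The toy loop below sketches that idea; it is purely illustrative, not HP’s actual Dynamic Smart Cooling algorithm, and the setpoint and rack names are invented.

```python
# Toy illustration of sensor-driven cooling: boost airflow only where
# racks run hot, idle it elsewhere. Not HP's actual algorithm.

rack_temps_c = {"rack-01": 24.0, "rack-02": 38.5,
                "rack-03": 22.0, "rack-04": 35.0}
SETPOINT_C = 27.0   # assumed target inlet temperature

# Service the hottest racks first.
for rack, temp in sorted(rack_temps_c.items(), key=lambda kv: -kv[1]):
    if temp > SETPOINT_C:
        # Scale the airflow boost with how far the rack exceeds the setpoint.
        boost = min(1.0, (temp - SETPOINT_C) / 10)
        print(f"{rack}: {temp:.1f}C -> cooling at {boost:.0%}")
    else:
        print(f"{rack}: {temp:.1f}C -> idle cooling, saving energy")
```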

Barclays is an early adopter. Many banks are only just beginning to turn their attention to the data centre. When they do, they will find they face a challenge that even vast expenditure might not be able to resolve.
