Fintech | March 6, 2006

Software speed

A chip’s ability to process information is not purely a hardware issue; it also depends on the software application that instructs the chip in how the work is to be done.

Mr Paine gives an example: some games consoles have superior hardware but that does not make them ideal for processing risk management strategy, he says. “An Xbox 360 [a popular games console] is probably more powerful than most of the servers in investment banks. However, pure hardware performance isn’t everything; what is also important is speed of application logic processing,” he says.

For this reason, one interesting recent development has been the uptake of programmable chips known as FPGAs (field-programmable gate arrays). To understand these chips, imagine that a normal chip runs a program by being fed instructions that must be translated for it; that translation takes time and energy. An FPGA is configured to hold a particular application’s logic within it, so it has less to translate, which speeds up the processing. The chip becomes specialised to run a particular application and can be reconfigured should it be needed for a different purpose.
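The building block behind this configurability is the look-up table (LUT): loading a truth table into the fabric makes the logic run directly in hardware rather than being re-interpreted instruction by instruction. A minimal software sketch of the idea (illustrative only, not vendor code):

```python
# Illustrative sketch: an FPGA is built from look-up tables (LUTs).
# 'Configuring' the chip amounts to loading truth tables, so the
# application logic runs directly rather than being re-translated.

def make_lut(truth_table):
    """Return a 2-input 'gate' whose behaviour is the loaded truth table."""
    def gate(a, b):
        # Index the truth table by the input bits: 00, 01, 10, 11.
        return truth_table[(a << 1) | b]
    return gate

# Configure the same 'fabric' as XOR, then reconfigure it as AND.
xor_gate = make_lut([0, 1, 1, 0])
and_gate = make_lut([0, 0, 0, 1])

print(xor_gate(1, 0))  # 1
print(and_gate(1, 0))  # 0
```

The same mechanism, scaled to thousands of LUTs, is what lets a single device be respecialised from one application to another.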

By configuring the chip to run a particular application logic, these chips can achieve astonishing results, according to Ken Horn at NGGrid. He says: “With some calculations, we are seeing performance levels improve by a thousand times. As they have a lower power usage compared with other chips, FPGAs can also reduce upkeep costs for server farms as well.”

Mr Horn points out that, at 200 watts each, the racks of servers normally found in a data centre may as well be racks of toasters for the power they use; FPGAs draw 4 or 5 watts. Maintenance costs are significant for server farms (much more than the initial outlay for hardware), so these chips could provide cost reduction on a large scale.
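A back-of-envelope check of the gap those wattage figures imply. The electricity price and rack size below are illustrative assumptions, not figures from the article:

```python
# Rough annual electricity cost using the power figures quoted.
# PRICE_PER_KWH and the 40-unit rack size are assumed for illustration.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # assumed $/kWh

def annual_energy_cost(watts, units=1):
    kwh = watts / 1000 * HOURS_PER_YEAR * units
    return kwh * PRICE_PER_KWH

server_rack = annual_energy_cost(200, units=40)  # 40 x 200 W servers
fpga_rack = annual_energy_cost(5, units=40)      # 40 x 5 W FPGAs

print(round(server_rack))  # 7008
print(round(fpga_rack))    # 175
```

A roughly 40-fold difference in running cost per rack, before any cooling savings are counted.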

FPGAs’ potential is not limited to residing on servers. Mr Horn suggests that they would normally be provided alongside standard chips in hardware but may prove more flexible: “You could have one in a standalone device, configured for processing tasks such as sub-second pricing for complex products such as CDOs. This could be plugged into your PC as and when you need to perform that task.”

Such specialist hardware is not ideal for all purposes and so the inherent challenge to Moore’s Law – building transistors at a smaller level – is being taken on at HP Labs by Mr Kuekes and Stan Williams. They have been working on a project to beat the current problems inherent in developing nanotech-scale chips. The main issue is that silicon has proven to be an unreliable material to work with at very small scales.

At the smallest sizes, transistors ‘stick’, unable to switch reliably between the on and off states they are supposed to represent. Transistor count is not an absolute measure of computing power, but Mr Kuekes says that allowing more processes to occur in parallel makes processing faster: “We put size first, even if the devices that are corresponding are a little bit slower. The speed, as seen by the user, means if you squeeze more processors onto the chip, you can achieve more.”
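The trade-off Mr Kuekes describes is easy to see in numbers: for a workload that parallelises well, throughput is cores multiplied by per-core speed, so many slower cores can beat fewer fast ones. The figures below are illustrative, not from the article:

```python
# Sketch of the size-versus-speed trade-off: for a fully parallel
# workload, total throughput is cores x per-core speed.
def throughput(cores, ops_per_core):
    return cores * ops_per_core

few_fast = throughput(cores=4, ops_per_core=100)   # 400 ops/cycle
many_slow = throughput(cores=16, ops_per_core=80)  # 1280, despite 20% slower cores

print(many_slow > few_fast)  # True
```

The caveat, which Amdahl’s law formalises, is that this only holds for the parallel portion of the work; serial bottlenecks blunt the gain.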

New transistor materials

Getting around the silicon problem has proved difficult, but Mr Kuekes believes the lab’s work will allow continued reductions in transistor size: “A transistor is fundamentally a switch; it is going to connect or not connect. As these devices are getting smaller, it is getting very hard to turn them off. Think of it as a leaky tap on a sink. No matter how hard you twist it, it will not go off. If you have a one when you should have a zero, you have a terrible thing, particularly in financial modelling when this is not affecting the pennies column.”

For this reason, HP Labs has sought a way to use other materials for creating more reliable chips with improved performance. “Once you get down to 10 nanometres or so, silicon gets very weak for quantum mechanical reasons. With our molecular mechanics we are developing something called the ‘crossbar latch’, which uses no silicon or semi-conductor but, for purposes of digital logic, it performs all the equivalent functions you need from a transistor,” says Mr Kuekes.

The way in which the electrons in the material being used for the crossbar latch act in this situation is affected by a property known as quantum mechanical tunnelling. This is the process by which a particle can move from point A to point B without an intermediary position (hence ‘tunnelling’). By changing the tunnelling properties of the materials that it was using, HP Labs was able to affect the way that the latch (described as “a wire, a set of molecules and then another wire that crosses it”) was turned on or off. The appearance of the ‘chip’ is one set of vertical wires lying across a layer of electrically ‘switchable’ material and a set of horizontal wires lying across the other side of the material (see illustration).


HP’s patented crossbar array, in which molecular-scale wires intersect, sandwiching a layer of electrically switchable material in between
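The crossbar array just described can be thought of as a grid: every crossing of a vertical and a horizontal wire is a junction that can be latched on or off. A toy model of that structure (purely illustrative, not HP’s design):

```python
# Toy model of a crossbar array: vertical and horizontal wires sandwich
# a switchable layer, and each crossing point is a junction that can be
# latched on (1) or off (0).
class Crossbar:
    def __init__(self, rows, cols):
        self.state = [[0] * cols for _ in range(rows)]

    def latch(self, row, col, on):
        # Latch the junction where wire 'row' crosses wire 'col'.
        self.state[row][col] = 1 if on else 0

    def read(self, row, col):
        return self.state[row][col]

xb = Crossbar(4, 4)
xb.latch(1, 2, on=True)
print(xb.read(1, 2))  # 1
print(xb.read(0, 0))  # 0
```

Addressing a junction by its two wires is what makes the layout dense: n + m wires address n × m switches.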

Getting the engine geared up is one thing but the power has to be transferred into results and that means speed, says Mr Kuekes. “Once you fall off the edge of a chip – for example, start passing data around the system – your bandwidth (the number of bits you are going to be able to send per second) drops precipitously.”

Transferring data via electricity on copper wires and cables is too slow. To keep up the pace, banks have been straining to shunt more data down faster pipes. At network provider BT Radianz, chief technology officer Mark Akass says that, as efficiency gains continue and the speed of light becomes a material limit, physical location may be the only factor with real ‘room’ for improvement: “The next step could be the proximity approach, in which people have to be located much more closely between or in offices.”

Erasing duplication

Not all trading venues are physically next to one another so there are limits to the assistance that this can provide but, by minimising duplication of systems and processing, some are pushing for further benefits, says Mr Akass. “There is consolidation. Some of our clients have consolidated various trading venues onto one trading platform. The trend is to collapse as many of these problems as possible into a single issue although you cannot eliminate them completely.”

As ever, the mantra for CIOs is ‘do more with less’.

In the future, the real advances may be found when technologies are built to tackle issues of size, speed and heat by combining advantages of optical and chip technologies. At the University of Texas, Professor Chen has developed a chip using ‘photonic crystals’ made of silicon that can modulate light using an electrical current. Light has to be slowed down for the current to alter the pattern of transmission, which is done via the crystals. Should this succeed as an industrially produced item, the technology could allow interconnection between memory and processors using much less power than normal. In November 2005, Professor Chen quoted a 3 milliwatt requirement for modulation, while pointing out that 50% of a computer’s power is consumed by interconnection if it uses a Pentium 4 processor. Not only could this development improve the ‘off-chip’ transmission speed to which Mr Kuekes draws attention, but it could also reduce the heat that processing generates.
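The scale of that gap is worth spelling out. Using the figures quoted (3 mW for modulation, 50% of power going to interconnect) and an assumed illustrative power budget for a Pentium 4-era system:

```python
# Scale comparison using the article's figures. The 100 W total power
# budget is an assumed illustrative value, not from the article.
P4_POWER_W = 100.0                 # assumed total power budget
interconnect_w = 0.5 * P4_POWER_W  # 50% consumed by interconnection
modulator_w = 3e-3                 # 3 milliwatt modulation requirement

ratio = interconnect_w / modulator_w
print(round(ratio))  # 16667
```

Even allowing that a full optical link needs more than one modulator, the quoted numbers are separated by four orders of magnitude.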

It may sound like science fiction now but, with technology iterations developing at an increasing pace, what seem like unbreakable physical barriers may soon be pushed aside thanks to the none-too-small pressure of global commerce.
