Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

September 27, 2011

On Wall Street, The Race to Zero Continues

L.S. Salamone

To address today’s need for speedier transaction processing and to handle the associated surge in message traffic, financial services firms are examining every aspect of their infrastructure to squeeze delays out of their end-to-end computational workflows. This quest to lower latencies at each step of processing trades, among other chores, was a common theme at the High Performance Computing Financial Markets Conference held in New York last week.

In session after session, panelists (representing the vendor and user communities) discussed how they were addressing the latency issue in the so-called “race to zero.”

Lee Fisher, Worldwide Business Development Manager for HP’s scalable, high performance solutions in financial services, kicked off the conference with a roundtable discussion that put the latency issue into perspective.

“Latency equals risk,” said Fisher. “The issue is not about latency for latency’s sake, it’s about managing latency to reduce risk.” He and others at the conference noted, for example, that if data is out of sync, an organization is making decisions based on older information than its competitors have.

Throughout the day, speakers discussed how their organizations were working to reduce latency in financial services trading systems. And while reducing latency takes a coordinated systems approach, the bulk of the day’s discussions highlighted how every aspect of the end-to-end operation is being closely examined. This includes looking at CPU performance, data movement, timing systems, and the applications themselves. 

On the HPC hardware side, a number of companies noted the technologies that are now being employed to help reduce latency. For example, Joseph Curley, Director, Technical Computing Marketing at Intel, talked about efforts to accelerate data analysis-driven decisions in financial services firms.

“Intel was asked to produce a processor that focused on reducing latency,” said Curley. He pointed to the Xeon class of processors, noting in particular the use of the X5698. The Xeon X5698 is built on the Westmere microarchitecture and shares many features with other Xeon 5600 chips. However, there is one major difference: it offers an extremely high core frequency of up to 4.4 GHz. (HP, for example, offers a special DL server with this processor for financial services firms.)

Looking a bit to the future, Curley said he expects the newer Intel Sandy Bridge processors to find favor in financial services firms. Also looking ahead, Don Newell, CTO of the Server Product Group at AMD, mentioned in a different session the recent release of the company’s Bulldozer microprocessor architecture, which offers improved performance per watt. Bulldozer will be implemented in AMD’s Interlagos (Opteron 6200) CPUs and, according to Newell, these chips will offer substantially higher integer performance than the previous-generation Istanbul processors.

Several of the conference panelists throughout the day noted other efforts to reduce latency are aimed at increasing the performance of code running on multi-core processors. Work here is focused on threading and increasing parallelism.

End-user organizations are definitely keeping an eye on this type of work. Jens von der Heide, Director at Barclays Capital, chimed in, noting that there is interest in newer processors. From a pricing perspective, he said, newer processors “continue to be very attractive, making it easy to migrate.” As has been the case for years, the newest top-performing processors cost about the same as the top-level processors they replace.

Relating back to Curley’s comment about the work on increased parallelism, von der Heide agreed that there is certainly more discussion today about the role of single-thread versus multi-thread. However, from his perspective, “the big issue is that when many threads are running, you want all of them to go faster, so what it comes down to is [we] want more cores.”
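The “more cores” point is easiest to see with independent, CPU-bound work. Below is a minimal, hypothetical sketch (not from the conference; the function names are invented) showing how such work can be spread across cores with Python’s standard library, so that adding cores speeds up all units of work at once:

```python
# Hypothetical illustration: independent per-order computations
# parallelized across cores with a process pool.
from concurrent.futures import ProcessPoolExecutor

def price_check(order_id: int) -> int:
    # Stand-in for a CPU-bound per-order computation.
    total = 0
    for i in range(10_000):
        total += (order_id * i) % 97
    return total

def run_serial(orders):
    return [price_check(o) for o in orders]

def run_parallel(orders, workers=4):
    # Each order is independent, so throughput can scale with core count.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(price_check, orders))

if __name__ == "__main__":
    orders = list(range(8))
    assert run_serial(orders) == run_parallel(orders)
```

The design point is the one von der Heide makes: when the work items are independent, each additional core speeds up the whole batch, whereas single-thread gains help only one stream at a time.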

As advances are made in these areas, other aspects of trade processing workflows, such as I/O, naturally need attention. “If you have a faster server, you then need to move data into it faster,” said Doron Arad, Client Solutions Director at Mellanox. “For that you need a low-latency network.”

Arad noted that the issue comes down to one question: how do you push data into memory? Technologies of interest here include non-blocking fabrics, kernel bypass approaches, message acceleration, and remote direct memory access (RDMA).
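The fabric technologies Arad lists require specialized NICs and libraries, but the same “get the message out with minimal delay” instinct shows up in much simpler host-side settings. As an illustrative sketch only (a loopback echo, with invented names, not any vendor’s stack), disabling Nagle’s algorithm with TCP_NODELAY keeps small messages from being buffered before they hit the wire:

```python
# Illustrative sketch: a loopback echo where the sender sets
# TCP_NODELAY, a common host-side low-latency tweak.
import socket
import threading

def make_echo_server():
    """Start a one-shot loopback echo server; returns (thread, port)."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))        # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(64))   # echo one small message back
        conn.close()
        srv.close()

    t = threading.Thread(target=serve)
    t.start()
    return t, port

def send_tick(port, payload: bytes) -> bytes:
    s = socket.socket()
    # Disable Nagle's algorithm: small messages (ticks, orders) go onto
    # the wire immediately instead of being coalesced into larger packets.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    s.connect(("127.0.0.1", port))
    s.sendall(payload)
    reply = s.recv(64)
    s.close()
    return reply

t, port = make_echo_server()
assert send_tick(port, b"ORDER:1") == b"ORDER:1"
t.join()
```

Kernel bypass and RDMA go much further, moving data into application memory without the kernel’s network stack in the path at all; this sketch only shows the mindset at the sockets layer.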

Additionally, organizations are broadening their focus on the network. In the past, companies would look at the switch and cabling; now they also include the NIC in the network latency discussion. To that end, he noted that there is wide-scale use of InfiniBand in the financial services market. Echoing that point, Fisher noted he was seeing growing use of InfiniBand, as well as 10G Ethernet NICs.

Arad added that he was also seeing demand for a NIC that supports both Ethernet and InfiniBand. Rob Cornish, IT Strategy and Infrastructure Officer at the International Securities Exchange (ISE), agreed, noting that he’d like to be able to choose InfiniBand or Ethernet based on a particular application’s needs.

Adding to the discussion, several conference participants said that in addition to 10G Ethernet NICs for their servers, they were also looking for 40G Ethernet support in their interconnect fabric switches.

Application Acceleration and Data Feeds

As financial services firms cut latency with improvements in server and networking hardware, the next place to look for performance gains is the applications themselves. That was the topic of an afternoon session on architecting the best solutions for what was called “Wall Street Optimization.”

In particular, David Rubio, Senior Consultant at Forsythe Solutions Group, discussed the need to profile applications. He noted that financial services organizations need to use appropriate tools to get insight into the bottlenecks that may be happening with each critical application.

One challenge is that assumptions sometimes prove wrong. For instance, an organization may believe an application is optimized because it uses the latest compiler, yet the application may still perform poorly because it relies on library routines that are 20 years old. Or an application might use a garbage collection algorithm that is inappropriate for a specific process.
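The garbage collection point generalizes: a runtime default that is fine for batch work can inject pauses into a hot path. As a purely illustrative sketch (a hypothetical helper, not a recommendation from the panel), one mitigation in a managed language is to pause collection around a latency-critical section and restore it afterward:

```python
# Hypothetical illustration: suspend Python's cyclic garbage collector
# around a latency-sensitive code path, then restore its prior state.
import gc

def run_latency_critical(fn):
    """Run fn with the cyclic collector paused, restoring it afterward."""
    was_enabled = gc.isenabled()
    gc.disable()          # no collection pauses inside the hot path
    try:
        return fn()
    finally:
        if was_enabled:   # restore whatever state we started in
            gc.enable()

result = run_latency_critical(lambda: sum(range(100)))
```

The trade-off, of course, is that deferred collection work has to happen somewhere; the point of Rubio’s comment is that such choices should be made deliberately, per process, not inherited from defaults.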

“You need visibility into the applications and how they are interacting with the system,” said Rubio. He noted the basic problem comes down to this: “You have software running as a thread on a core. What is the thread doing? Is it executing code or waiting for something? If a thread is blocked, what is it waiting for?” He noted the need to use common OS tools like truss, strace, snoop, and tcpdump, or DTrace on Solaris systems.            
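Rubio’s executing-versus-waiting question can be approximated even without those tools, by comparing CPU time to wall-clock time: a thread that is blocked accrues almost no CPU time while the clock runs. The sketch below is an invented illustration of that idea, not one of the tools named above:

```python
# Illustrative sketch: infer whether a code path was mostly executing
# or mostly waiting by comparing CPU time to elapsed wall time.
import time

def classify(fn) -> str:
    """Run fn and report whether it mostly executed or mostly waited."""
    wall0, cpu0 = time.perf_counter(), time.process_time()
    fn()
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    # A CPU-bound path accrues CPU time close to wall time; a blocked
    # path (I/O, lock, sleep) accrues almost none.
    return "executing" if cpu / wall > 0.5 else "waiting"

assert classify(lambda: time.sleep(0.2)) == "waiting"
```

Tools like strace and DTrace answer the follow-up question this sketch cannot: *what*, specifically, a waiting thread is blocked on.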

There was also talk of using LANZ, Arista Networks’ Latency Analyzer. Within the financial services market, LANZ is used to gain visibility into the network and determine whether microbursts of trading activity are occurring. It offers sub-millisecond reporting intervals, so congestion can be detected and application-layer messages sent faster than some products can forward a packet.
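LANZ does this detection in the switch itself, but the underlying idea is simple to state: a microburst is many packets landing inside a sub-millisecond window, even when the average rate looks tame. As an invented illustration of the concept (not how LANZ is implemented), a sliding window over packet arrival timestamps can flag such windows:

```python
# Illustrative sketch: flag sub-millisecond windows whose packet count
# exceeds a threshold, given sorted arrival timestamps in seconds.
def find_microbursts(timestamps, window=0.001, threshold=5):
    """Return start times of windows holding more than threshold packets."""
    bursts = []
    start = 0
    for end in range(len(timestamps)):
        # Slide the window so it spans at most `window` seconds.
        while timestamps[end] - timestamps[start] > window:
            start += 1
        if end - start + 1 > threshold:
            bursts.append(timestamps[start])
    return bursts

# Ten packets spread over 100 ms: no burst. Ten packets in 1 ms: burst.
assert find_microbursts([i * 0.01 for i in range(10)]) == []
assert find_microbursts([i * 0.0001 for i in range(10)]) != []
```

The practical point raised at the conference is that only sub-millisecond-resolution reporting makes such bursts visible; one-second counters average them away entirely.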

Complementing these approaches, some panelists talked about the need for hardware acceleration, including the use of FPGAs and GPUs, to improve application performance. However, one conference attendee pointed out that in some applications, such as those found in the futures market, the data feeds change frequently, making it hard to implement them on exotic hardware.

That triggered some discussion about the issue of data feeds. Certainly, optimizing a firm’s hardware and software can only cut latencies so far if the external data needed in the calculations and operations (and supplied by the major exchanges) is not delivered in a timely manner. The exchanges that provide the data are turning to technologies like co-location and hardware acceleration to reduce any delays on their end of the operation.

As the discussion evolved throughout the day at the conference, it became increasingly clear that with all of these aspects involved in the race to zero, a key element in latency reduction is the role of the systems integrator.

The consensus at the conference was that the way to reduce latency was to take a solutions approach, perhaps managed by a systems integrator. The approach would need to examine ways to optimize systems to reduce delays in CPU performance, host performance, networking including the NIC, and the data feeds from the major exchanges.