Banks and Outsourcing: Just Say ‘Latency’

By Derrick Harris

September 24, 2008

A common critique of external cloud computing services is that big-time IT users, like major corporations and financial institutions, are nowhere near getting on board. That might be true for the new breed of “cloud” services, but for the financial services sector, at least, outsourcing is far from a dirty word.

Especially in the world of electronic or algorithmic trading, latency becomes a huge issue that, for some firms, can only be solved by hosting trading platforms in the same datacenters as major exchanges. For those with less-constant, less real-time demands, renting cycles on a global grid architecture, or even renting your own dedicated grid, can guarantee CPUs whenever and wherever they are needed. However they choose to do it — and even if they opt otherwise — the financial world understands the benefits, pitfalls and nuances of outsourced IT.

Ted Chamberlain, research director in the Networking & Communications Services practice at Gartner, believes we’re actually experiencing a bit of an outsourcing renaissance. Traditionally, financial firms have not liked to let technology out of their grip because they view it as such a differentiator, but “I think we’re at the point now where so many of these financial houses either are running out of space or aren’t in the locations they want to be,” says Chamberlain. “Conversely, people like BT Radianz and Savvis actually have started to, I think, build their portfolios to lure [financial institutions] away from their own datacenters.” Especially in the past 12 months, he adds, the proliferation of exchanges going online has drawn a wide range of financial institutions — from investment banks to brokerage houses — to look into hosted datacenters for the sake of interconnecting with the various sources of market data. (Customers with space in one of Savvis’ 29 international datacenters, for example, can cross-connect with anyone in any of the provider’s other locations.)

“It’s definitely becoming a larger trend,” Chamberlain forecasts. “I don’t think we’re going to see a complete flip of all these financial service companies outsourcing to these exchanges in a cloud computing model, but we do see them diversifying and putting some level of their infrastructure in certain providers.”

Although Gartner does not forecast this market, Chamberlain says his gut feeling is that these services will grow somewhere between 30 and 35 percent year over year, with the EMEA and APAC markets doubling in size.

Alex Tabb, a partner in the Crisis & Continuity Services division of the Tabb Group, also sees an increase in outsourcing interest, although not across the board. The big sell-side investment banks experimented with outsourcing several years ago but many have since shied away from the practice, he says. The reason is that potential cost savings were not worth the sacrifices they had to make in terms of control, management and integration. And with such large operations, outsourcing just became too much to handle. For medium-sized banks and smaller buy-side firms, though, Tabb says hosted solutions make a lot of sense because it is easier to pay someone with that expertise than to staff an entire IT department.

Latency: Public Enemy No. 1

One area where there is no contention is the importance of low latency: It is the No. 1 reason financial firms are moving to hosted solutions. Chamberlain calls latency the primary motivator for making the move, particularly when it can be provided without “gargantuantly scaled costs.” Granted, he notes, managed services often bear a premium over simply buying a point-to-point connection or traditional content delivery services, but the presence of guaranteed SLAs, along with the low latency, makes it a premium with which firms can live.

Tabb says latency is at its most critical in electronic trading scenarios, when trading applications require high-speed access to the exchanges. In particular, he says, “Latency becomes a killer if you’re running VLDBs (very large databases).” An example would be a firm like Bank of New York trading billions of dollars in bonds — for such an operation, Tabb said, latency will bring down the system. Smaller houses doing algorithmic trading don’t have quite the same latency demands as their bigger counterparts, because the requirements probably are not as high with 50 people accessing the trading application versus thousands, he added.

“If you’re going to outsource it, it needs to be close — physically, the proximity needs to be close. Because latency, often times, has to do with location,” says Tabb. “If you’re talking about time-sensitive applications, that becomes a deal-breaker.”

Savvis, one of the service providers Chamberlain cites as an industry leader (along with BT Radianz), sees the need for low latency driving customer demand. According to Roji Oommen, director of business development for the financial services vertical at Savvis, latency is a big deal whether firms are doing arbitrage or trying to hide basket trades via division multiplexing and long trade streams. “What they found is it’s a lot cheaper to move your infrastructure inside a datacenter rather than trying to figure out how to optimize your application or get faster hardware and so forth,” he says.

With Savvis’ Proximity Hosting solution (its most popular), which places customers’ trading engines alongside the exchanges’ platforms inside Savvis’ strategically located datacenters, the latency difference compared to in-house solutions is huge, says Oommen. For example, he explains, 80 percent of the New York Stock Exchange’s equities volume occurs over BATS and Archipelago, both of which house their matching engines with Savvis. With Proximity Hosting, he says, latency goes from milliseconds to fractional microseconds. “It would be safe to say,” Oommen adds, “that more and more banks are looking at outsourced datacenter hosting, especially as they participate in markets all over the world, rather than building it in-house — particularly for automated trading.”
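
The order-of-magnitude shift is easy to sanity-check from first principles: propagation delay in optical fiber runs at roughly 5 microseconds per kilometer, so distance dominates once everything else is stripped away. The sketch below is illustrative only; the path lengths are assumptions, not Savvis measurements.

```python
# Back-of-the-envelope propagation latency. Illustrative figures only;
# these are not measurements from Savvis or any exchange.

FIBER_DELAY_US_PER_KM = 5.0  # light in fiber covers ~200,000 km/s, i.e. ~5 us per km

def one_way_latency_us(path_km: float) -> float:
    """Propagation delay alone, ignoring switching, serialization and software."""
    return path_km * FIBER_DELAY_US_PER_KM

# Hypothetical paths:
scenarios = {
    "in-house platform, ~200 km of fiber to the matching engine": 200,
    "metro-area datacenter, ~20 km fiber path": 20,
    "proximity hosting, ~0.1 km cross-connect in the same facility": 0.1,
}

for label, km in scenarios.items():
    us = one_way_latency_us(km)
    print(f"{label}: {us:,.1f} us ({us / 1000:.3f} ms)")
```

Real round trips add switching, serialization and software time on top, but the distance term alone shows why a cross-connect inside the same building lands in the sub-microsecond to low-microsecond range while even a short metro circuit cannot.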

In Savvis’ Weehawken, N.J., datacenter, for example, customers share space with the American Stock Exchange, Philadelphia Stock Exchange, BATS Trading, FxAll/Accelor and the New York Stock Exchange.

Xasax, a Naples, Fla.-based company with space at key datacenters across the country (including Savvis Weehawken, Equinix Secaucus, Equinix Cermak and NYSE Metrotech, among others), has created an environment and solution set optimized for hosting financial software that requires access to real-time market data. According to Noah Lieske, Xasax CEO, the customers of the company’s xsProximity solution have the option of housing a trading platform 30 microseconds from NASDAQ. If customers opt to collocate in Equinix Secaucus, they are 1 millisecond from NASDAQ. Essentially, he says, Xasax’s customers want to localize trading logic next to the exchanges, and “they couldn’t do it faster because we built it out the fastest possible route — and if there’s a faster way to do it, we’ll do it.”

“For those who truly understand the low-latency game,” he adds, “it doesn’t take much for them to understand the value of putting their machines at the closest proximity to the exchange as possible.”

Other Driving Forces

Latency, cost and ease of management are not alone in driving financial services customers to outsource. Savvis’ Oommen, for example, credits a deluge of market data with bringing customers into Savvis’ fold. He says the output of options traffic in North America has increased from 20 Mbps to 600 Mbps. Therefore, an options trading firm could be staring down a 20x increase (from about $5,000 a month to $100,000 a month) just in raw network connectivity to handle this data load.
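
Worked through, those figures imply feed bandwidth growing roughly 30-fold while the monthly bill grows roughly 20-fold. A quick sketch of the arithmetic (the dollar amounts are the approximations quoted above, not Savvis list prices):

```python
# Rough arithmetic on the figures quoted in the article; illustrative only.
old_feed_mbps, new_feed_mbps = 20, 600   # North American options traffic
old_cost, new_cost = 5_000, 100_000      # approximate monthly connectivity spend (USD)

print(f"feed growth: {new_feed_mbps / old_feed_mbps:.0f}x")
print(f"cost growth: {new_cost / old_cost:.0f}x")
print(f"effective rate then: ${old_cost / old_feed_mbps:,.0f} per Mbps per month")
print(f"effective rate now:  ${new_cost / new_feed_mbps:,.0f} per Mbps per month")
```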

From Xasax’s point of view, datacenter power and space constraints also play a role in growing the business. While growth in high-frequency trading is exponential, says Lieske, the facilities where Xasax’s customers and the financial institutions collocate are pretty much out of power and space. “As fast as these facilities can be built, they’re being filled up,” he says. By having space in these coveted locations, Lieske sees his company as having “a bit of a monopoly.” However, he acknowledges that limited space sometimes requires Xasax to send customers to secondary locations — a problem that is mitigated by physical cross-connects between the various datacenters. Customers can have a secondary location as their hub but still maintain the lowest possible latency between other datacenters.

In terms of cost, Lieske says a comparably equipped in-house infrastructure could cost a couple of hundred thousand dollars per month versus $20,000-$30,000 with Xasax.

For IBM, which claims a large number of financial customers for its Computing on Demand solutions, the real draw is the ability to handle peak loads without overprovisioning hardware. Christina Cunningham, a project executive in the Computing on Demand division, says IBM’s customers tend to have extreme workloads requiring lots of capacity for short periods of time. These types of jobs include risk management calculations and Monte Carlo simulations. “Why purchase something that you need to have … in your datacenter taking up space 24×7 when you might only need that in a cyclical way?” she asks rhetorically. By renting time on IBM’s global grid, customers can get resources on a dedicated, variable or dynamic basis depending on their needs. Cunningham also cites space and power constraints as a driver, noting that IBM helps customers solve these issues by offering grid resources in three strategic, minimal-latency locations: New York, Japan and London.
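
Monte Carlo risk jobs illustrate why this model fits: each batch of simulated paths is independent, so the work scatters cleanly across however many rented nodes are available that night. The sketch below uses made-up portfolio parameters and plain NumPy; it is a toy illustration of the workload shape, not IBM’s Computing on Demand tooling.

```python
# Minimal Monte Carlo value-at-risk sketch. Portfolio parameters are made up;
# the point is that each batch is independent and trivially parallel.
import numpy as np

def simulate_pnl(n_paths: int, seed: int) -> np.ndarray:
    """One independent batch of simulated one-day portfolio P&L (USD)."""
    rng = np.random.default_rng(seed)
    portfolio_value = 500e6            # hypothetical $500M book
    daily_vol = 0.012                  # hypothetical 1.2% daily volatility
    returns = rng.normal(0.0, daily_vol, n_paths)
    return portfolio_value * returns

# Each batch needs only a seed and a few parameters, so batches can be
# scattered across grid nodes rented for the run and simply concatenated.
batches = [simulate_pnl(250_000, seed) for seed in range(8)]   # 8 "nodes"
pnl = np.concatenate(batches)

var_99 = -np.percentile(pnl, 1)        # 99% one-day value at risk
print(f"99% 1-day VaR over {len(pnl):,} paths: ${var_99:,.0f}")
```

Because a batch is defined by nothing more than a seed and a handful of parameters, the same job can run on a few in-house cores most nights and on hundreds of rented ones ahead of a reporting deadline, which is exactly the cyclical pattern Cunningham describes.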

But Financial Firms Hate Letting Go of Their Stuff …

Stating that “[i]t is not a one-size-fits-all solution,” Tabb Group’s Tabb says there are technical, management and oversight challenges that outsourcing brings into play. Other key concerns include the effect on the bottom line; continuous capabilities — whether service providers can sustain operations in the event of a power failure or weather event; legal implications of doing computing overseas; and how much control firms require over their data. Tabb says buy-side firms often have very complex algorithms, which they hold dear. “They look at that as the mother lode; that is their firm,” he explains. “They don’t want anyone else to see it, much less touch it.” In terms of continuity, the Tabb Group often tells smaller clients that outsourced solutions offer better results than they can achieve in-house.

“It does take a certain amount of learning curve and risk tolerance to be able to take their trading systems out of their control, so to speak, and into a hosted environment,” says Xasax’s Lieske. Customers have to trust their provider is providing a high-quality environment so the customer can focus on their core competencies rather than the nuts and bolts of running a trading system, he adds. Xasax is in a good position to offer 100 percent uptime because it requires its approximately 40 partners to source things four times (it encourages six), a level of redundancy that allows for multiple failures before a customer might experience ill effects.
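
Assuming the sourced circuits and feeds fail independently, the payoff from four-way or six-way sourcing is easy to quantify. The per-source availability below is an assumption chosen for illustration, not a figure Xasax publishes.

```python
# Composite availability with N independently sourced providers.
# The 99.5% per-source figure is an illustrative assumption, not a Xasax spec.

MINUTES_PER_YEAR = 365 * 24 * 60

def combined_availability(per_source: float, n_sources: int) -> float:
    """Probability that at least one of n independent sources is up."""
    return 1.0 - (1.0 - per_source) ** n_sources

per_source = 0.995
for n in (1, 2, 4, 6):
    avail = combined_availability(per_source, n)
    downtime_min = (1.0 - avail) * MINUTES_PER_YEAR
    print(f"{n} source(s): {avail:.10%} available, ~{downtime_min:.4g} min/year of total outage")
```

Even a modest 99.5 percent per source compounds, under the independence assumption, to a total-outage probability on the order of one in a billion with four sources, which is the arithmetic behind the multiple-failure tolerance Lieske describes.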

Oommen says Savvis has been pretty lucky in terms of the obstacles it has had to overcome. For one, he says, the company can cite its hosting of the New York Stock Exchange as a security proof point, as well as many other exchanges and every “brand-name” bank and hedge fund. For basic collocation, Oommen says customer concerns usually center around “is it staffed, is it secure [and] is it outside a nuclear blast radius of New York?”

IBM, says Cunningham, tackles security concerns by offering very secure point-to-point connections, with one datacenter featuring 26 incoming carriers for maximum redundancy. Big Blue also offers a diskless model in which computation is carried out on the grid but no data is left there. However, IBM’s biggest security claim probably is its willingness to work with customers. “Most of the companies know who the manufacturers are that are really secure in the standards they want to abide by in the industry, and so we work with those types of providers to make sure we have the equipment,” Cunningham says.

Additionally, IBM has ethically hacked its infrastructure twice in the past three years “to make sure there’s no way anybody can get in”; complies with ITAR (International Traffic in Arms Regulations) for government customers; and offers “top secret and above” clearance, Cunningham says.

Gartner’s Chamberlain doesn’t see too many issues around security due to the heavy investments made by service providers and the tendency for financial firms to “kick the tires” before adopting any new technology. However, he believes too much growth in the hosting market could actually be an issue. “I don’t think we’re going to see too many instances where information was stolen [or] IP networks were sniffed. I think we’re OK there,” Chamberlain says. “I think the only future potential issue is that with the low-latency requirements, you can only do so much until the speed of light trips you up.”

If scaling gets to the point where providers cannot meet ideal latency levels, Chamberlain wouldn’t be surprised to see a customer like the Chicago Mercantile Exchange bring its systems back in-house and build out its own fiber network. However, he acknowledges, there is a lot of untapped network capacity and dark fiber in this country, so it would take a major growth spike to reach that point.

Virtualization is the Future

Nearly everyone interviewed for this story cites virtualization, in one form or another, as a key to future growth. Oommen says Savvis has seen a big uptake in virtualization, and the company offers customers the option of running applications in a multi-tenant cloud. He cites asset management firms as big users here, as they have many end-users who want to see their portfolios in real time — a prime job for Web servers running within the cloud. Of course, he says, there are still concerns around both security and performance on this front, and Savvis doesn’t force anyone to run in a shared environment. In terms of performance, though, he says Web services generally run smoothly on virtualized platforms.

Xasax also is growing through automation and virtualization, from storage to provisioning. One use case the company really likes is the ability to sandbox new clientele in evaluation environments, where potential customers can sign up for a VM, bring it into production, and “get the same connectivity as Credit Suisse or Bank of America,” Lieske says. And while high-frequency traders won’t mess around with a virtual OS, Lieske says they seem to have no problems with virtual storage. Virtualization also allows Xasax to offer a variety of other services, he adds, like production-quality execution management systems, FIX (Financial Information eXchange) gateways for brokers, etc. Internally, virtualization has allowed Xasax to grow with less overhead and management than would have been required in a strictly physical environment.
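
FIX itself is plain tag=value text separated by an SOH (0x01) delimiter, which is part of what makes it practical to terminate at a hosted gateway. The toy encoder below builds a single limit order; the CompIDs and prices are placeholders, and a production gateway would also handle sessions, sequence numbers and resend logic.

```python
# Toy FIX 4.2 "New Order - Single" encoder. Field values are placeholders;
# session management, sequence numbers and required timestamps are omitted.
SOH = "\x01"

def fix_message(fields: list[tuple[int, str]]) -> str:
    body = SOH.join(f"{tag}={value}" for tag, value in fields) + SOH
    header = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"          # BeginString + BodyLength
    checksum = sum((header + body).encode()) % 256        # tag 10: mod-256 byte sum
    return f"{header}{body}10={checksum:03d}{SOH}"

order = fix_message([
    (35, "D"),          # MsgType: New Order - Single
    (49, "BUYSIDE01"),  # SenderCompID (placeholder)
    (56, "BROKER01"),   # TargetCompID (placeholder)
    (55, "IBM"),        # Symbol
    (54, "1"),          # Side: buy
    (38, "100"),        # OrderQty
    (40, "2"),          # OrdType: limit
    (44, "125.50"),     # Price
])
print(order.replace(SOH, "|"))
```

Printed with the SOH swapped for a pipe, the result is the familiar 8=FIX.4.2|9=…|35=D|… wire format that a hosted gateway relays between a client’s trading logic and its brokers.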

IBM Computing on Demand’s Cunningham says her division also is seeing an interest in virtualization from clients, particularly around virtual desktops. However, she admits, there are some hurdles to be overcome by customers before they are ready to deploy virtual desktops in real-time scenarios. “When they look at cloud, they’re really focused on server capacity that’s taking up their datacenters,” she explains. “They’re a little bit less concerned about the number of servers taking up their Exchange; what they really want is the high-performance computing, and they want to be able to send it out.” While IBM does have a virtual desktop offering for financial services, it is seeing more questions than takers at this point.

Alex Tabb also sees desktop virtualization taking off, especially among smaller firms. Markets are running 24 hours a day, he says, and traders need to access their terminals no matter where they are in the world, from any computer. “In today’s economic environment, things change on a dime: the world’s coming to an end, and all of a sudden, in 10 minutes, we’re in the big rally,” he quipped. “Having that flexibility is really important.”

Whatever Works Best

Although the credit crisis of last week was not a failure of IT, Tabb says, the turmoil does underscore the importance of finding the right trading solution, be it in-house or outsourced. And with IT spending among financial service organizations in limbo — a 180-degree turn from “definitely on the rise” six months ago — outsourcing could look even more appealing.

“A strong information technology department and strong capabilities with your human resources and your people can make all the difference in the world,” Tabb says. “Having the ability to react quickly to market changes — and here is where latency becomes a huge issue — can have a significant impact on your trades.”
