Banks and Outsourcing: Just Say ‘Latency’

By Derrick Harris

September 24, 2008

A common critique of external cloud computing services is that big-time IT users, like major corporations and financial institutions, are nowhere near getting on board. That might be true for the new breed of “cloud” services, but for the financial services sector, at least, outsourcing is far from a dirty word.

Especially in the world of electronic or algorithmic trading, latency becomes a huge issue that, for some firms, can only be solved by hosting trading platforms in the same datacenters as major exchanges. For those with less-constant, less real-time demands, renting cycles on a global grid architecture, or even renting your own dedicated grid, can guarantee CPUs whenever and wherever they are needed. However they choose to do it — and even if they opt otherwise — the financial world understands the benefits, pitfalls and nuances of outsourced IT.

Ted Chamberlain, research director in the Networking & Communications Services practice at Gartner, believes we’re actually experiencing a bit of an outsourcing renaissance. Traditionally, financial firms have not liked to let technology out of their grip because they view it as such a differentiator, but “I think we’re at the point now where so many of these financial houses either are running out of space or aren’t in the locations they want to be,” says Chamberlain. “Conversely, people like BT Radianz and Savvis actually have started to, I think, build their portfolios to lure [financial institutions] away from their own datacenters.” Especially in the past 12 months, he adds, the proliferation of exchanges going online has drawn a wide range of financial institutions — from investment banks to brokerage houses — to look into hosted datacenters for the sake of interconnecting with the various sources of market data. (Customers with space in one of Savvis’ 29 international datacenters, for example, can cross-connect with anyone in any of the provider’s other locations.)

“It’s definitely becoming a larger trend,” Chamberlain forecasts. “I don’t think we’re going to see a complete flip of all these financial service companies outsourcing to these exchanges in a cloud computing model, but we do see them diversifying and putting some level of their infrastructure in certain providers.”

Although Gartner does not forecast this market, Chamberlain says his gut feeling is that these services will grow somewhere between 30 and 35 percent year over year, with the EMEA and APAC markets doubling in size.

Alex Tabb, a partner in the Crisis & Continuity Services division of the Tabb Group, also sees an increase in outsourcing interest, although not across the board. The big sell-side investment banks experimented with outsourcing several years ago, but many have since shied away from the practice, he says, because the potential cost savings were not worth the sacrifices in control, management and integration. With such large operations, outsourcing simply became too much to handle. For mid-sized banks and smaller buy-side firms, though, Tabb says hosted solutions make a lot of sense because it is easier to pay someone with that expertise than to staff an entire IT department.

Latency: Public Enemy No. 1

One area where there is no contention is the importance of low latency: it is the No. 1 reason financial firms are moving to hosted solutions. Chamberlain calls latency the primary motivator for making the move, particularly when it can be delivered without “gargantuanly scaled costs.” Granted, he notes, managed services often carry a premium over simply buying a point-to-point connection or traditional content delivery services, but guaranteed SLAs and low latency make that a premium with which firms can live.

Tabb says latency is at its most critical in electronic trading scenarios, when trading applications require high-speed access to the exchanges. In particular, he says, “Latency becomes a killer if you’re running VLDBs (very large databases).” An example would be a firm like Bank of New York trading billions of dollars in bonds — for an operation of that scale, Tabb says, latency will bring down the system. Smaller houses doing algorithmic trading don’t have quite the latency demands of their bigger counterparts, he adds, because the requirements with 50 people accessing the trading application are not as high as with thousands.

“If you’re going to outsource it, it needs to be close — physically, the proximity needs to be close. Because latency, often times, has to do with location,” says Tabb. “If you’re talking about time-sensitive applications, that becomes a deal-breaker.”

Savvis, one of the service providers Chamberlain cites as an industry leader (along with BT Radianz), sees the need for low latency driving customer demand. According to Roji Oommen, director of business development for the financial services vertical at Savvis, latency is a big deal whether firms are doing arbitrage or trying to hide basket trades via division multiplexing and long trade streams. “What they found is it’s a lot cheaper to move your infrastructure inside a datacenter rather than trying to figure out how to optimize your application or get faster hardware and so forth,” he says.

With Savvis’ Proximity Hosting solution (its most popular), which places customers’ trading engines alongside the exchanges’ platforms inside Savvis’ strategically located datacenters, the latency difference compared to in-house solutions is huge, says Oommen. For example, he explains, 80 percent of the New York Stock Exchange’s equities volume occurs over BATS and Archipelago, both of which house their matching engines with Savvis. With Proximity Hosting, he says, latency goes from milliseconds to fractional microseconds. “It would be safe to say,” Oommen adds, “that more and more banks are looking at outsourced datacenter hosting, especially as they participate in markets all over the world, rather than building it in-house — particularly for automated trading.”

In Savvis’ Weehawken, N.J., datacenter, for example, customers share space with the American Stock Exchange, Philadelphia Stock Exchange, BATS Trading, FxAll/Accelor and the New York Stock Exchange.
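The milliseconds-to-microseconds improvement Oommen describes is, at bottom, geometry: light in fiber covers roughly 200,000 kilometers per second, or about 5 microseconds per kilometer each way. A back-of-the-envelope sketch (the distances are illustrative assumptions, not Savvis figures) makes the point:

```python
# Back-of-the-envelope propagation delay: why colocation can collapse
# latency from milliseconds to fractional microseconds. Distances are
# illustrative assumptions, not figures from Savvis or the exchanges.
SPEED_IN_FIBER_KM_PER_S = 200_000  # light in fiber travels at ~2/3 of c

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds over a fiber path."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1_000_000

for label, km in [
    ("In-house platform 200 km from the exchange", 200.0),
    ("Metro link, 10 km", 10.0),
    ("Cross-connect inside the datacenter, 100 m", 0.1),
]:
    print(f"{label}: {one_way_delay_us(km):,.1f} us one way")
```

Propagation sets only a floor — switch hops, serialization and application time all add to it — but the floor alone shows why moving from a metro ring to an in-building cross-connect changes the unit of measurement.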

Xasax, a Naples, Fla.-based company with space at key datacenters across the country (including Savvis Weehawken, Equinix Secaucus, Equinix Cermak and NYSE Metrotech, among others), has created an environment and solution set optimized for hosting financial software that requires access to real-time market data. According to Noah Lieske, Xasax CEO, customers of the company’s xsProximity solution have the option of housing a trading platform 30 microseconds from NASDAQ. If customers opt to collocate in Equinix Secaucus, they are 1 millisecond from NASDAQ. Essentially, he says, Xasax’s customers want to localize trading logic next to the exchanges, and “they couldn’t do it faster because we built it out the fastest possible route — and if there’s a faster way to do it, we’ll do it.”

“For those who truly understand the low-latency game,” he adds, “it doesn’t take much for them to understand the value of putting their machines at the closest proximity to the exchange as possible.”
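How would a firm verify numbers like these? A crude application-level check can be scripted in a few lines. The sketch below assumes a hypothetical echo endpoint at the gateway; software timestamps of this kind are far noisier than the hardware timestamping a serious low-latency shop would use, so treat it as a sanity check, not a benchmark.

```python
# Minimal sketch: sampling application-level round-trip time to a
# trading gateway over TCP. Host and port are hypothetical, and the
# gateway is assumed to echo back whatever it receives.
import socket
import time

HOST, PORT = "gateway.example.net", 9000  # hypothetical endpoint

with socket.create_connection((HOST, PORT)) as sock:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
    samples = []
    for _ in range(100):
        start = time.perf_counter()
        sock.sendall(b"PING\n")
        sock.recv(64)
        samples.append((time.perf_counter() - start) * 1e6)  # microseconds
    samples.sort()
    print(f"median RTT: {samples[len(samples) // 2]:.1f} us")
```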

Other Driving Forces

Latency, cost and ease of management are not the only factors driving financial services customers to outsource. Savvis’ Oommen, for example, credits a deluge of market data with bringing customers into Savvis’ fold. The output of options traffic in North America, he says, has increased from 20 Mbps to 600 Mbps, so an options trading firm could be staring down a 20x increase in its raw network connectivity bill (from about $5,000 a month to $100,000 a month) just to handle the data load.
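The arithmetic behind those figures is worth spelling out (the rates and dollar amounts are Oommen’s; the sketch merely runs the numbers):

```python
# Running the numbers quoted above: the options feed grew 30x while the
# quoted monthly connectivity bill grew 20x. Figures are the article's.
old_rate_mbps, new_rate_mbps = 20, 600
old_cost_usd, new_cost_usd = 5_000, 100_000

print(f"feed growth: {new_rate_mbps / old_rate_mbps:.0f}x")   # 30x
print(f"cost growth: {new_cost_usd / old_cost_usd:.0f}x")     # 20x
print(f"implied $/Mbps: {old_cost_usd / old_rate_mbps:.0f} then, "
      f"{new_cost_usd / new_rate_mbps:.0f} now")              # 250 vs. 167
```

Per-megabit pricing improves with scale, but the absolute bill is still a twentyfold jump — the kind of step change that pushes firms toward shared infrastructure.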

From Xasax’s point of view, datacenter power and space constraints also play a role in growing the business. While growth in high-frequency trading is exponential, says Lieske, the facilities where Xasax’s customers and the financial institutions collocate are pretty much out of power and space. “As fast as these facilities can be built, they’re being filled up,” he says. By having space in these coveted locations, Lieske sees his company as having “a bit of a monopoly.” However, he acknowledges that limited space sometimes requires Xasax to send customers to secondary locations — a problem that is mitigated by physical cross-connects between the various datacenters. Customers can have a secondary location as their hub but still maintain the lowest possible latency between other datacenters.

In terms of cost, Lieske says a comparably equipped in-house infrastructure could cost a couple of hundred thousand dollars per month versus $20,000-$30,000 with Xasax.

For IBM, which claims a large number of financial customers for its Computing on Demand solutions, the real draw is the ability to handle peak loads without overprovisioning hardware. Christina Cunningham, a project executive in the Computing on Demand division, says IBM’s customers tend to have extreme workloads requiring lots of capacity for short periods of time — jobs like risk management calculations and Monte Carlo simulations. “Why purchase something that you need to have … in your datacenter taking up space 24×7 when you might only need that in a cyclical way?” she asks. By renting time on IBM’s global grid, customers can get resources on a dedicated, variable or dynamic basis depending on their needs. Cunningham also cites space and power constraints as a driver, noting that IBM helps customers solve these issues by offering grid resources in three strategic, minimal-latency locations: New York, Japan and London.
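The workloads Cunningham describes are embarrassingly parallel: each simulated path is independent, so jobs fan out naturally across rented nodes. A minimal sketch of the pattern — with a toy one-factor portfolio and illustrative parameters that have nothing to do with IBM’s actual service — might look like this:

```python
# Minimal sketch of the bursty, embarrassingly parallel job described
# above: a Monte Carlo value-at-risk estimate of the kind a firm might
# farm out to rented grid capacity instead of owning the hardware.
# Portfolio model and parameters are illustrative, not from IBM.
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_pnl(n_paths: int, seed: int) -> list[float]:
    """Simulate daily P&L for a toy portfolio with one normal risk factor."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1_000_000.0) for _ in range(n_paths)]

def main() -> None:
    # Fan the paths out across workers; on a grid, each chunk is a job.
    with ProcessPoolExecutor() as pool:
        chunks = pool.map(simulate_pnl, [250_000] * 8, range(8))
    pnl = sorted(x for chunk in chunks for x in chunk)
    var_99 = -pnl[int(0.01 * len(pnl))]  # 99% one-day value at risk
    print(f"99% 1-day VaR: ${var_99:,.0f}")

if __name__ == "__main__":
    main()
```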

But Financial Firms Hate Letting Go of Their Stuff …

Stating that “[i]t is not a one-size-fits-all solution,” Tabb Group’s Tabb says there are technical, management and oversight challenges that outsourcing brings into play. Other key concerns include the effect on the bottom line; continuous capabilities — whether service providers can sustain operations in the event of a power failure or weather event; legal implications of doing computing overseas; and how much control firms require over their data. Tabb says buy-side firms often have very complex algorithms, which they hold dear. “They look at that as the mother lode; that is their firm,” he explains. “They don’t want anyone else to see it, much less touch it.” In terms of continuity, the Tabb Group often tells smaller clients that outsourced solutions offer better results than they can achieve in-house.

“It does take a certain amount of learning curve and risk tolerance to be able to take their trading systems out of their control, so to speak, and into a hosted environment,” says Xasax’s Lieske. Customers have to trust that their provider is delivering a high-quality environment, he adds, so they can focus on their core competencies rather than the nuts and bolts of running a trading system. Xasax is in a good position to offer 100 percent uptime because it requires its approximately 40 partners to source everything four times (it encourages six) — a level of redundancy that allows for multiple failures before a customer might experience ill effects.
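The redundancy math is simple but powerful. Assuming independent suppliers, each available 99.9 percent of the time (a figure assumed here for illustration — Xasax publishes no such number), the odds of every source failing at once fall off geometrically:

```python
# Why quadruple-sourcing buys near-100-percent uptime, assuming
# independent failures and an illustrative 99.9% per-source availability.
per_source_availability = 0.999  # assumed, for illustration only

for n_sources in (1, 2, 4, 6):
    p_all_down = (1 - per_source_availability) ** n_sources
    print(f"{n_sources} source(s): P(all down) = {p_all_down:.0e}")
```

The caveat is the independence assumption: suppliers sharing a conduit or a power feed fail together, which is why the requirement is four distinct providers rather than four circuits from one.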

Oommen says Savvis has been pretty lucky in terms of having to overcome obstacles. For one, he says, the company can cite its hosting of the New York Stock Exchange — as well as many other exchanges and every “brand-name” bank and hedge fund — as a security proof point. For basic collocation, Oommen says customer concerns usually center on “is it staffed, is it secure [and] is it outside a nuclear blast radius of New York?”

IBM, says Cunningham, tackles security concerns by offering very secure point-to-point connections, with one datacenter featuring 26 incoming carriers for maximum redundancy. Big Blue also offers a diskless model in which computation is carried out on the grid but no data is left there. Its biggest security claim, however, probably is IBM’s willingness to work with customers. “Most of the companies know who the manufacturers are that are really secure in the standards they want to abide by in the industry, and so we work with those types of providers to make sure we have the equipment,” Cunningham says.

Additionally, IBM has ethically hacked its infrastructure twice in the past three years “to make sure there’s no way anybody can get in”; complies with ITAR (International Traffic in Arms Regulations) for government customers; and offers “top secret and above” clearance, Cunningham says.

Gartner’s Chamberlain doesn’t see too many issues around security due to the heavy investments made by service providers and the tendency for financial firms to “kick the tires” before adopting any new technology. However, he believes too much growth in the hosting market could actually be an issue. “I don’t think we’re going to see too many instances where information was stolen [or] IP networks were sniffed. I think we’re OK there,” Chamberlain says. “I think the only future potential issue is that with the low-latency requirements, you can only do so much until the speed of light trips you up.”

If scaling gets to the point where providers cannot meet ideal latency levels, Chamberlain wouldn’t be surprised to see a customer like the Chicago Mercantile Exchange bring its systems back in-house and build out its own fiber network. However, he acknowledges, there is a lot of untapped network capacity and dark fiber in this country, so it would take a major growth spike to reach that point.

Virtualization is the Future

Nearly everyone interviewed for this story cites virtualization, in one form or another, as being a key to future growth. Oommen says Savvis has seen a big uptake in virtualization, and the company offers customers the option of running applications in a multi-tenant cloud. He cites asset management firms as a big user here, as they have many end-users who want to see their portfolios in real time — a prime job for Web servers running within the cloud. Of course, he says, there are still concerns around both security and performance along this front, and Savvis doesn’t force anyone to run in a shared environment. In terms of performance, though, he says Web services generally run smoothly on virtualized platforms.

Xasax also is growing through automation and virtualization, from storage to provisioning. One use case the company really likes is the ability to sandbox new clientele in evaluation environments, where potential customers can sign up for a VM, bring it into production, and “get the same connectivity as Credit Suisse or Bank of America,” Lieske says. And while high-frequency traders won’t mess around with a virtual OS, Lieske says they seem to have no problems with virtual storage. Virtualization also allows Xasax to offer a variety of other services, he adds, like production-quality execution management systems, FIX (Financial Information eXchange) gateways for brokers, etc. Internally, virtualization has allowed Xasax to grow with less overhead and management than would have been required in a strictly physical environment.
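For readers unfamiliar with FIX, it is a plain tag=value protocol: fields are joined by the SOH (0x01) byte and framed by a body length (tag 9) and a modulo-256 checksum (tag 10). The sketch below builds a toy heartbeat message; a real session would also carry timestamps and a logon exchange, and the firm identifiers here are invented.

```python
# Hedged sketch of the wire format a FIX gateway speaks: tag=value
# fields joined by SOH (0x01), framed by BodyLength (9) and a
# modulo-256 CheckSum (10). Field values are illustrative.
SOH = "\x01"

def build_fix_message(msg_type: str, fields: list[tuple[int, str]]) -> str:
    # BodyLength counts every byte from the MsgType field (35=) through
    # the SOH that precedes the CheckSum field.
    body = SOH.join(f"{tag}={val}" for tag, val in [(35, msg_type)] + fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    # CheckSum is the byte sum of the whole message before tag 10, mod 256.
    checksum = sum((head + body).encode()) % 256
    return f"{head}{body}10={checksum:03d}{SOH}"

# A toy heartbeat (35=0) with invented sender/target firm identifiers.
msg = build_fix_message("0", [(49, "BUYSIDE"), (56, "BROKER"), (34, "1")])
print(msg.replace(SOH, "|"))  # print with visible delimiters
```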

IBM Computing on Demand’s Cunningham says her division also is seeing an interest in virtualization from clients, particularly around virtual desktops. However, she admits, there are some hurdles to be overcome by customers before they are ready to deploy virtual desktops in real-time scenarios. “When they look at cloud, they’re really focused on server capacity that’s taking up their datacenters,” she explains. “They’re a little bit less concerned about the number of servers taking up their Exchange; what they really want is the high-performance computing, and they want to be able to send it out.” While IBM does have a virtual desktop offering for financial services, it is seeing more questions than takers at this point.

Alex Tabb also sees desktop virtualization taking off, especially among smaller firms. Markets are running 24 hours a day, he says, and traders need to access their terminals no matter where they are in the world, from any computer. “In today’s economic environment, things change on a dime: the world’s coming to an end, and all of a sudden, in 10 minutes, we’re in the big rally,” he quips. “Having that flexibility is really important.”

Whatever Works Best

Although the credit crisis of last week was not a failure of IT, Tabb says, the turmoil does underscore the importance of finding the right trading solution, be it in-house or outsourced. And with IT spending among financial service organizations in limbo — a 180-degree turn from “definitely on the rise” six months ago — outsourcing could look even more appealing.

“A strong information technology department and strong capabilities with your human resources and your people can make all the difference in the world,” Tabb says. “Having the ability to react quickly to market changes — and here is where latency becomes a huge issue — can have a significant impact on your trades.”
