Banks and Outsourcing: Just Say ‘Latency’

By Derrick Harris

September 24, 2008

A common critique of external cloud computing services is that big-time IT users, like major corporations and financial institutions, are nowhere near getting on board. That might be true for the new breed of “cloud” services, but for the financial services sector, at least, outsourcing is far from a dirty word.

Especially in the world of electronic or algorithmic trading, latency is a huge issue that, for some firms, can only be solved by hosting trading platforms in the same datacenters as the major exchanges. For those with less constant, less real-time demands, renting cycles on a global grid architecture, or even renting a dedicated grid, can guarantee CPUs whenever and wherever they are needed. However they choose to do it — and even if they opt otherwise — the financial world understands the benefits, pitfalls and nuances of outsourced IT.

Ted Chamberlain, research director in the Networking & Communications Services practice at Gartner, believes we’re actually experiencing a bit of an outsourcing renaissance. Traditionally, the financial sector has been loath to let technology out of its grip because firms view it as such a differentiator, but “I think we’re at the point now where so many of these financial houses either are running out of space or aren’t in the locations they want to be,” says Chamberlain. “Conversely, people like BT Radianz and Savvis actually have started to, I think, build their portfolios to lure [financial institutions] away from their own datacenters.” Especially in the past 12 months, he adds, the proliferation of exchanges going online has drawn a wide range of financial institutions — from investment banks to brokerage houses — to look into hosted datacenters for the sake of interconnecting with the various sources of market data. (Customers with space in one of Savvis’ 31 international datacenters, for example, can cross-connect with anyone in any of the provider’s other locations.)

“It’s definitely becoming a larger trend,” Chamberlain forecasts. “I don’t think we’re going to see a complete flip of all these financial service companies outsourcing to these exchanges in a cloud computing model, but we do see them diversifying and putting some level of their infrastructure in certain providers.”

Although Gartner does not forecast this market, Chamberlain says his gut feeling is that these services will grow somewhere between 30 and 35 percent year over year, with the EMEA and APAC markets doubling in size.

Alex Tabb, a partner in the Crisis & Continuity Services division of the Tabb Group, also sees an increase in outsourcing interest, although not across the board. The big sell-side investment banks experimented with outsourcing several years ago, but many have since shied away from the practice, he says. The reason: the potential cost savings were not worth the sacrifices they had to make in terms of control, management and integration, and with such large operations, outsourcing simply became too much to handle. For medium-sized banks and smaller buy-side firms, though, Tabb says hosted solutions make a lot of sense because it is easier to pay someone with that expertise than to staff an entire IT department.

Latency: Public Enemy No. 1

One area where there is no contention is the importance of low latency: It is the No. 1 reason financial firms are moving to hosted solutions. Chamberlain calls latency the primary motivator for making the move, particularly when it can be delivered without “gargantuantly scaled costs.” Granted, he notes, managed services often carry a premium over simply buying a point-to-point connection or traditional content-delivery services, but guaranteed SLAs, along with the low latency, make it a premium with which firms can live.

Tabb says latency is at its most critical in electronic trading scenarios, where trading applications require high-speed access to the exchanges. In particular, he says, “Latency becomes a killer if you’re running VLDBs (very large databases).” An example would be a firm like Bank of New York trading billions of dollars in bonds — for an operation like that, Tabb says, latency will bring down the system. Smaller houses doing algorithmic trading don’t have quite the latency demands of their bigger counterparts, he adds, because the requirements are not as high with 50 people accessing the trading application versus thousands.

“If you’re going to outsource it, it needs to be close — physically, the proximity needs to be close. Because latency, often times, has to do with location,” says Tabb. “If you’re talking about time-sensitive applications, that becomes a deal-breaker.”

Savvis, one of the service providers Chamberlain cites as an industry leader (along with BT Radianz), sees the need for low latency driving customer demand. According to Roji Oommen, director of business development for the financial services vertical at Savvis, latency is a big deal whether firms are doing arbitrage or trying to hide basket trades via division multiplexing and long trade streams. “What they found is it’s a lot cheaper to move your infrastructure inside a datacenter rather than trying to figure out how to optimize your application or get faster hardware and so forth,” he says.

With Savvis’ Proximity Hosting solution (its most popular), which places customers’ trading engines alongside the exchanges’ platforms inside Savvis’ strategically located datacenters, the latency difference compared to in-house solutions is huge, says Oommen. For example, he explains, 80 percent of the New York Stock Exchange’s equities volume occurs over BATS and Archipelago, both of which house their matching engines with Savvis. With Proximity Hosting, he says, latency goes from milliseconds to fractional microseconds. “It would be safe to say,” Oommen adds, “that more and more banks are looking at outsourced datacenter hosting, especially as they participate in markets all over the world, rather than building it in-house — particularly for automated trading.”

In Savvis’ Weehawken, N.J., datacenter, for example, customers share space with the American Stock Exchange, Philadelphia Stock Exchange, BATS Trading, FxAll/Accelor and the New York Stock Exchange.

Xasax, a Naples, Fla.-based company with space at key datacenters across the country (including Savvis Weehawken, Equinix Secaucus, Equinix Cermak and NYSE Metrotech, among others), has created an environment and solution set optimized for hosting financial software that requires access to real-time market data. According to Noah Lieske, Xasax CEO, customers of the company’s xsProximity solution have the option of housing a trading platform 30 microseconds from NASDAQ. If they opt to colocate in Equinix Secaucus instead, they are 1 millisecond from NASDAQ. Essentially, he says, Xasax’s customers want to localize trading logic next to the exchanges, and “they couldn’t do it faster because we built out the fastest possible route — and if there’s a faster way to do it, we’ll do it.”
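The spread Lieske describes, tens of microseconds in the same facility versus a millisecond across a metro area, is mostly propagation physics. Here is a rough back-of-the-envelope sketch in Python (the distances and the two-thirds-of-c fiber speed are illustrative assumptions, not Xasax figures):

```python
# Back-of-the-envelope propagation-delay sketch (illustrative only;
# the figures below are approximations, not from the article).
SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
FIBER_FACTOR = 2 / 3            # light in fiber travels at roughly 2/3 c

def one_way_latency_us(distance_km: float) -> float:
    """Minimum one-way propagation delay over fiber, in microseconds."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1e6

# A cross-connect within one facility vs. a metro link vs. a long haul:
for label, km in [("same facility (~0.2 km)", 0.2),
                  ("across a metro area (~10 km)", 10),
                  ("New York to Chicago (~1,200 km)", 1200)]:
    print(f"{label}: ~{one_way_latency_us(km):,.1f} us one way")
```

Propagation is only the floor, since serialization, switching and the application stack add more on top, which is why the in-facility figures Lieske quotes land in the tens of microseconds rather than single digits.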

“For those who truly understand the low-latency game,” he adds, “it doesn’t take much for them to understand the value of putting their machines at the closest proximity to the exchange as possible.”

Other Driving Forces

Latency, cost and ease of management are not alone in driving financial services customers to outsource. Savvis’ Oommen, for example, credits a deluge of market data with bringing customers into the fold. The output of options traffic in North America, he says, has increased from 20 MBps to 600 MBps, so an options trading firm could be staring down a 20x increase (from about $5,000 a month to $100,000 a month) in raw network connectivity alone just to handle the data load.
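The multiples are worth pausing on: the feed Oommen describes grew 30-fold while the quoted connectivity bill grew 20-fold. A quick sketch of the arithmetic, using the article's numbers:

```python
# Figures as quoted in the article; this just makes the ratios explicit.
old_rate, new_rate = 20, 600          # options-data traffic, MBps
old_cost, new_cost = 5_000, 100_000   # $/month for raw connectivity

print(f"Data growth: {new_rate / old_rate:.0f}x")   # 30x
print(f"Cost growth: {new_cost / old_cost:.0f}x")   # 20x
```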

From Xasax’s point of view, datacenter power and space constraints also play a role in growing the business. While growth in high-frequency trading is exponential, says Lieske, the facilities where Xasax’s customers and the financial institutions colocate are pretty much out of power and space. “As fast as these facilities can be built, they’re being filled up,” he says. By having space in these coveted locations, Lieske sees his company as having “a bit of a monopoly.” However, he acknowledges that limited space sometimes requires Xasax to send customers to secondary locations — a problem mitigated by physical cross-connects between the various datacenters. Customers can have a secondary location as their hub but still maintain the lowest possible latency to the other datacenters.

In terms of cost, Lieske says a comparably equipped in-house infrastructure could cost a couple of hundred thousand dollars per month versus $20,000-$30,000 with Xasax.

For IBM, which claims a large number of financial customers for its Computing on Demand solutions, the real draw is the ability to handle peak loads without overprovisioning hardware. Christina Cunningham, a project executive in the Computing on Demand division, says IBM’s customers tend to have extreme workloads requiring lots of capacity for short periods of time — jobs like risk management calculations and Monte Carlo simulations. “Why purchase something that you need to have … in your datacenter taking up space 24×7 when you might only need that in a cyclical way?” she asks. By renting time on IBM’s global grid, customers can get resources on a dedicated, variable or dynamic basis, depending on their needs. Cunningham also cites space and power constraints as a driver, noting that IBM helps customers solve these issues by offering grid resources in three strategic, minimal-latency locations: New York, Japan and London.
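The article does not detail the workloads themselves, but a Monte Carlo risk job of the kind Cunningham mentions is easy to sketch, and the sketch shows why such jobs suit a rented grid: every simulated path is independent, so the work splits cleanly across however many CPUs are available for the nightly peak. A minimal, illustrative toy model (all parameters hypothetical):

```python
import random
import statistics

def simulate_pnl(n_paths: int, mu: float = 0.0005, sigma: float = 0.02,
                 horizon_days: int = 10) -> list[float]:
    """Monte Carlo P&L paths for a single position (toy model)."""
    pnl = []
    for _ in range(n_paths):
        value = 1.0
        for _ in range(horizon_days):
            value *= 1 + random.gauss(mu, sigma)  # one day's random return
        pnl.append(value - 1.0)
    return pnl

# Each batch of paths is independent, so the job splits trivially across
# however many grid CPUs are rented for the cyclical peak.
paths = sorted(simulate_pnl(100_000))
var_99 = -paths[int(0.01 * len(paths))]   # 99% value-at-risk estimate
print(f"99% 10-day VaR: {var_99:.2%}, mean P&L: {statistics.mean(paths):.2%}")
```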

But Financial Firms Hate Letting Go of Their Stuff …

Stating that “[i]t is not a one-size-fits-all solution,” Tabb Group’s Tabb says outsourcing brings technical, management and oversight challenges into play. Other key concerns include the effect on the bottom line; continuity capabilities — whether service providers can sustain operations through a power failure or weather event; the legal implications of doing computing overseas; and how much control firms require over their data. Buy-side firms often have very complex algorithms, which they hold dear, Tabb says. “They look at that as the mother lode; that is their firm,” he explains. “They don’t want anyone else to see it, much less touch it.” In terms of continuity, though, the Tabb Group often tells smaller clients that outsourced solutions offer better results than they can achieve in-house.

“It does take a certain amount of learning curve and risk tolerance to be able to take their trading systems out of their control, so to speak, and into a hosted environment,” says Xasax’s Lieske. Customers have to trust that their provider delivers a high-quality environment so they can focus on their core competencies rather than the nuts and bolts of running a trading system, he adds. Xasax is in a good position to offer 100 percent uptime because it requires its approximately 40 partners to source everything four times over (six is encouraged), a level of redundancy that allows for multiple failures before a customer experiences ill effects.
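Lieske does not spell out the math behind the four-times sourcing rule, but the logic is standard redundancy arithmetic: with independent sources, overall availability is one minus the product of the individual failure probabilities. A quick sketch (the 99 percent per-source availability is an assumed figure for illustration, not a Xasax number):

```python
# Availability from redundant sourcing (illustrative; the 99% per-source
# availability is an assumption, not a figure from the article).
def combined_availability(per_source: float, copies: int) -> float:
    """Probability that at least one of `copies` independent sources is up."""
    return 1 - (1 - per_source) ** copies

for copies in (1, 2, 4, 6):
    print(f"{copies} source(s): {combined_availability(0.99, copies):.8f}")
# With a 1% per-source failure rate, all four sources fail together only
# about 1e-8 of the time, so several simultaneous failures are needed
# before a customer notices anything.
```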

Oommen says Savvis has been pretty lucky in terms of having to overcome obstacles. For one, he says, the company can cite its hosting of the New York Stock Exchange — along with many other exchanges and every “brand-name” bank and hedge fund — as a security proof point. For basic colocation, Oommen says customer concerns usually center on “is it staffed, is it secure [and] is it outside a nuclear blast radius of New York?”

IBM, says Cunningham, tackles security concerns by offering very secure point-to-point connections, with one datacenter featuring 26 incoming carriers for maximum redundancy. Big Blue also offers a diskless model in which computation is carried out on the grid but no data is left there. Its biggest security claim, however, probably is IBM’s willingness to work with customers. “Most of the companies know who the manufacturers are that are really secure in the standards they want to abide by in the industry, and so we work with those types of providers to make sure we have the equipment,” Cunningham says.

Additionally, IBM has ethically hacked its infrastructure twice in the past three years “to make sure there’s no way anybody can get in”; complies with ITAR (International Traffic in Arms Regulations) for government customers; and offers “top secret and above” clearance, Cunningham says.

Gartner’s Chamberlain doesn’t see too many issues around security due to the heavy investments made by service providers and the tendency for financial firms to “kick the tires” before adopting any new technology. However, he believes too much growth in the hosting market could actually be an issue. “I don’t think we’re going to see too many instances where information was stolen [or] IP networks were sniffed. I think we’re OK there,” Chamberlain says. “I think the only future potential issue is that with the low-latency requirements, you can only do so much until the speed of light trips you up.”

If scaling gets to the point where providers cannot meet ideal latency levels, Chamberlain wouldn’t be surprised to see a customer like the Chicago Mercantile Exchange bring its systems back in-house and build out its own fiber network. However, he acknowledges, there is a lot of untapped network capacity and dark fiber in this country, so it would take a major growth spike to reach that point.

Virtualization Is the Future

Nearly everyone interviewed for this story cites virtualization, in one form or another, as a key to future growth. Oommen says Savvis has seen big uptake in virtualization, and the company offers customers the option of running applications in a multi-tenant cloud. He cites asset management firms as big users here, as they have many end-users who want to see their portfolios in real time — a prime job for Web servers running within the cloud. Of course, he says, there are still security and performance concerns on this front, and Savvis doesn’t force anyone to run in a shared environment. In terms of performance, though, he says Web services generally run smoothly on virtualized platforms.

Xasax also is growing through automation and virtualization, from storage to provisioning. One use case the company really likes is the ability to sandbox new clientele in evaluation environments, where potential customers can sign up for a VM, bring it into production, and “get the same connectivity as Credit Suisse or Bank of America,” Lieske says. And while high-frequency traders won’t mess around with a virtual OS, Lieske says they seem to have no problems with virtual storage. Virtualization also allows Xasax to offer a variety of other services, he adds, like production-quality execution management systems, FIX (Financial Information eXchange) gateways for brokers, etc. Internally, virtualization has allowed Xasax to grow with less overhead and management than would have been required in a strictly physical environment.
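For readers who haven't seen FIX on the wire: it is a flat tag=value format with fields delimited by the SOH (0x01) byte, framed by a length field (tag 9) and a modulo-256 checksum (tag 10). A minimal sketch of assembling a message follows (the field values are invented for illustration; a production gateway of the kind Lieske describes also handles sessions, sequence numbers and validation):

```python
SOH = "\x01"  # FIX field delimiter

def fix_message(fields: list[tuple[int, str]]) -> str:
    """Assemble a FIX message: body, then length (tag 9) and checksum (tag 10)."""
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    checksum = sum(ord(c) for c in head + body) % 256  # sum of bytes, mod 256
    return f"{head}{body}10={checksum:03d}{SOH}"

# Hypothetical new-order-single (35=D): buy 100 shares of IBM at the market.
order = fix_message([(35, "D"), (49, "CLIENT1"), (56, "BROKER1"),
                     (11, "ORDER-001"), (55, "IBM"), (54, "1"),
                     (38, "100"), (40, "1")])
print(order.replace(SOH, "|"))  # render the delimiter visibly
```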

IBM Computing on Demand’s Cunningham says her division also is seeing interest in virtualization from clients, particularly around virtual desktops. However, she admits, there are some hurdles to overcome before customers are ready to deploy virtual desktops in real-time scenarios. “When they look at cloud, they’re really focused on server capacity that’s taking up their datacenters,” she explains. “They’re a little bit less concerned about the number of servers taking up their Exchange; what they really want is the high-performance computing, and they want to be able to send it out.” While IBM does have a virtual desktop offering for financial services, it is seeing more questions than takers at this point.

Alex Tabb also sees desktop virtualization taking off, especially among smaller firms. Markets are running 24 hours a day, he says, and traders need to access their terminals no matter where they are in the world, from any computer. “In today’s economic environment, things change on a dime: the world’s coming to an end, and all of a sudden, in 10 minutes, we’re in the big rally,” he quips. “Having that flexibility is really important.”

Whatever Works Best

Although last week’s credit crisis was not a failure of IT, Tabb says, the turmoil does underscore the importance of finding the right trading solution, be it in-house or outsourced. And with IT spending among financial services organizations in limbo — a 180-degree turn from “definitely on the rise” six months ago — outsourcing could look even more appealing.

“A strong information technology department and strong capabilities with your human resources and your people can make all the difference in the world,” Tabb says. “Having the ability to react quickly to market changes — and here is where latency becomes a huge issue — can have a significant impact on your trades.”
