Financial Services and Beyond: Platform’s Grid Roadmap

By Derrick Harris, Editor

April 30, 2007

When Platform Computing announced a few weeks ago that it is increasing its investment in the financial services market, it meant a lot more than just hiring vice presidents to serve the United States and Europe (which Platform did) and stepping up sales efforts. It is an attempt to make a big imprint in a vertical market that is primed not only for grid computing, but for whatever evolutionary steps the technology takes in the years to come.

While the company certainly has not been ignoring financial services customers, as evidenced by its ability to count financial giants like JPMorgan, Citigroup and Lehman Brothers among its customers, Platform has been more focused, according to founder and CEO Songnian Zhou, on enabling successful grid deployments for its stable of big-name clients. However, with the market continuing to grow and expand in terms of grid use, especially when it comes to analytic applications and investment banks, Zhou believes financial services is now ready to receive a big push from Platform, and he expects the company to gain a large share of the market as a result of what he believes, not surprisingly, is the best technology portfolio going.

As for the men charged with leading this new, focused charge into the financial world, Zhou expects them to place the company in the same position he believes it holds in the electronics and industrial manufacturing markets: No. 1. Zhou said he expects Jim Mancuso and Charles Jarvis, vice presidents of financial services for North America and EMEA, respectively, to “drive broader adoption of grid computing, not only in the top-tier investment banks … but also [in] second-tier investment banks and hedge funds” and to ensure product and market share leadership.

Mancuso, who has been selling software to Wall Street and the financial market for the last 20 years, says he is up to the challenge. One reason is personal pride, because, as he said, “At the end of the day, it really comes down to our clients’ success … I’m inextricably attached by the hip to my clients’ success.” And for financial services firms, noted Mancuso, success is defined in dollars, which makes his role that much more important. Continued Mancuso: “We’re not talking about thousands of dollars or a couple million dollars. We’re potentially talking about hundreds of millions of dollars or above that relate directly to their bottom lines.”

That kind of mindset would explain Mancuso’s belief that while technology definitely is a big part of the sales pitch, one would be foolish to ignore the importance of building real relationships with clients. Since joining Platform earlier this month, Mancuso said one thing that really sticks out is this type of partnership attitude. “There’s an attitude,” he explained, “of really developing solutions with [clients], listening to what they have to say and then trying to bring solutions to market that don’t just have whiz-bang technology, but solve real business problems.”

As for what strategy the company is taking to realize its goals of dominating the financial services field, Mancuso echoes Zhou’s sentiment, stating that “the first battle is being fought in the investment banking space,” necessitating significant initial expenditures of energy and effort along that front. However, aside from going after the low-hanging fruit (i.e., investment banks and their Monte Carlo simulations and risk analysis applications), Mancuso believes Platform also can make a big splash — and an even bigger name for itself — by helping customers get beyond the first phase of grid computing and into real sharing across lines of business.

Lehman Brothers’ Global Grid Plans

This brings us to Lehman Brothers, the 163-year-old financial services firm that in January 2006 chose Platform to implement its company-wide grid infrastructure. But for Lehman Brothers, which has been working with grid-like technologies and writing distributed software since 1992, it was about more than simply getting started with a hot new technology. According to Thanos Mitsolides, senior vice president of fixed income technology and analytics at Lehman Brothers, “This wasn’t an effort just to find a solution. This was an effort to find an enterprise solution for a very long time.” He isn’t overstating the situation.

“We know exactly where we want to be five years from now,” said Mitsolides, “and we are taking very gradual steps in that direction.” Lehman Brothers’ first grid pilot went into production in March 2006, its second in June and its third in September, all within the fixed income division. The steps of the five-year (estimated) plan, said Mitsolides, are to give each group within the company its own grid; then to begin merging these grids; followed by an effort to have groups and individuals start sharing resources; and, finally, to spread the grid infrastructure across Lehman Brothers’ global outposts.

In parallel with this, he added, the company expects to take advantage of the service-oriented nature of its grid by moving various services onto it. Grids, Mitsolides believes, are excellent in terms of fault tolerance, distribution and monitoring. “Eventually,” he added, “we’re expecting to have a utility grid that people use for compute- or non-compute-intensive service globally.” Lehman Brothers might even institute a system to support bidding for resources, where the highest-priority requests get first dibs on available CPU cycles and the department is charged accordingly for them.
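
To make the idea concrete, here is a minimal sketch of how such a bidding-and-chargeback scheme could work. It is purely illustrative: the function names, the flat per-CPU rate and the bid format are assumptions for the example, not a description of Platform Symphony or of any system Lehman Brothers has built.

```python
import heapq
from collections import defaultdict

def allocate(bids, available_cpus, rate_per_cpu=1.0):
    """Grant CPU slots to the highest-priority bids first.

    bids: list of (priority, department, cpus_requested) tuples.
    Returns (allocations, charges) dictionaries keyed by department.
    heapq is a min-heap, so priorities are negated to pop the highest first.
    """
    heap = [(-priority, dept, cpus) for priority, dept, cpus in bids]
    heapq.heapify(heap)
    allocations = defaultdict(int)
    charges = defaultdict(float)
    while heap and available_cpus > 0:
        _, dept, requested = heapq.heappop(heap)
        granted = min(requested, available_cpus)
        available_cpus -= granted
        allocations[dept] += granted
        charges[dept] += granted * rate_per_cpu  # simple chargeback: pay per CPU won
    return dict(allocations), dict(charges)

if __name__ == "__main__":
    bids = [
        (9, "fixed-income", 600),      # highest priority, gets first dibs
        (7, "mortgage", 700),
        (5, "corporate-credit", 500),  # lowest priority, may be squeezed out
    ]
    allocations, charges = allocate(bids, available_cpus=1500)
    print(allocations)  # {'fixed-income': 600, 'mortgage': 700, 'corporate-credit': 200}
    print(charges)      # each department is billed only for the cycles it won
```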

But all of this is a few years down the road, and in the world of corporate IT, it often is what’s happening now that’s important. In the case of Lehman Brothers, Mitsolides acknowledges that while the company has grand plans for its grid, it is starting slowly and not taking full advantage of Platform’s Symphony software, but that doesn’t mean the company isn’t seeing results already. Company-wide, those utilizing the grids are happy to be free of the burden of manually allocating memory resources based on job size. In the corporate credit department, Mitsolides said benefits are being reaped during the start-up process of a particular application that requires loading gigabytes of data in advance, and Symphony also is proving its worth for “priming” certain services and keeping them up, running and waiting for work.
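
The “priming” pattern Mitsolides describes is, in spirit, a pre-warmed worker: pay the expensive data load once at start-up and then keep the service alive, waiting for requests. The generic Python sketch below illustrates the idea with standard-library threads and a queue; it is a stand-in built on assumptions, not Platform Symphony code, and names such as load_reference_data are hypothetical.

```python
import queue
import threading
import time

def load_reference_data():
    """Stand-in for the expensive start-up step (loading gigabytes of data).
    In a primed service this cost is paid once, not once per request."""
    time.sleep(2)                      # pretend this takes minutes in real life
    return {"curves": "...", "positions": "..."}

def primed_worker(work_queue):
    data = load_reference_data()       # "priming": done before any work arrives
    while True:                        # stay up, running and waiting for work
        job = work_queue.get()
        if job is None:                # sentinel to shut the worker down
            break
        print(f"priced {job} against {len(data)} preloaded reference datasets")
        work_queue.task_done()

if __name__ == "__main__":
    jobs = queue.Queue()
    threading.Thread(target=primed_worker, args=(jobs,), daemon=True).start()
    for trade in ("swap-123", "bond-456", "cds-789"):
        jobs.put(trade)                # each request reuses the preloaded data
    jobs.join()
    jobs.put(None)
```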

Results have been no less impressive within Mitsolides’ own fixed income group, where he said the big initial benefit came in terms of scalability. His group’s old software ran into problems when dealing with more than 100 nodes, each with a quad-core processor, for a total of 400 CPUs. “With Symphony,” said Mitsolides, “we were able to merge all of our sub-grids into one, which certainly simplified the scheduling of the batches for us,” adding that he’s now looking at about 1,500 CPUs in a single grid.
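
One way to see the appeal of merging sub-grids is fragmentation: a large batch may not fit in any single sub-grid even when the combined free capacity is ample. The toy comparison below uses made-up numbers, not Lehman Brothers’ actual configuration, to illustrate that point.

```python
def can_run(job_cpus, free_cpus_per_pool):
    """A job fits only if some single pool has enough free CPUs for it."""
    return any(free >= job_cpus for free in free_cpus_per_pool)

if __name__ == "__main__":
    # Three separate sub-grids with idle capacity scattered across them...
    sub_grids = [300, 250, 150]      # free CPUs in each sub-grid
    # ...versus the same capacity merged into a single grid.
    merged = [sum(sub_grids)]        # 700 free CPUs in one pool

    job = 500                        # a batch needing 500 CPUs
    print(can_run(job, sub_grids))   # False: no sub-grid can host it alone
    print(can_run(job, merged))      # True: the merged grid can
```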

Currently, Lehman Brothers’ grid installations primarily are being used to run derivatives, mortgage and corporate credit risk applications.

Obvious benefits aside, however, Mitsolides and his comrades on Lehman Brothers’ grid committee did encounter some resistance at the start, a situation not uncommon in such large grid deployments. Initially, he explained, people wondered why they would need help with tasks at which they already were skilled. But when they saw other individuals and groups, equally skilled in their own respective tasks, using the grid to tackle real business problems instead of technical problems and doing much more advanced types of work, the skeptics started asking to be put on the grid themselves. And if Lehman Brothers’ grid plans go the way the company hopes (see above), it is the solving of business problems across the board that really will signify success.

“It’s not just about everyone being on a single grid and being able to share hardware, and it is not only about improving utilization,” stated Mitsolides. “It is also not only about having one standard mechanism for distributing services. One of the biggest benefits, which will probably come last, is that as we have everyone distributing work and services using the same exact API … we expect that people are going to do much more sharing of functionality.”
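
The sketch below illustrates what “the same exact API” buys in principle: once every group registers and submits work through one interface, a service published by one line of business can be reused by another without reimplementation. It is a hypothetical, in-process illustration, not Platform Symphony’s actual API, and the service names and toy pricing logic are invented for the example.

```python
from typing import Callable, Dict

class Grid:
    """Toy in-process stand-in for a shared work-distribution layer."""

    def __init__(self):
        self._services: Dict[str, Callable] = {}

    def register(self, name: str, func: Callable) -> None:
        """A line of business publishes a service under a well-known name."""
        self._services[name] = func

    def submit(self, name: str, *args):
        """Any other group submits work to that service through the same API."""
        return self._services[name](*args)

if __name__ == "__main__":
    grid = Grid()

    # The fixed income group registers a (deliberately simplistic) pricing service...
    grid.register("price_bond", lambda face, coupon: face * (1 + coupon))

    # ...and the corporate credit group reuses it instead of reimplementing it.
    print(grid.submit("price_bond", 1_000_000, 0.05))
```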

Platform’s Future Vision

Platform CEO Zhou sees Lehman Brothers’ goal of distributing business-critical applications and services across a company-wide grid as part of a trend not only in the financial services market, but across all vertical markets. Of course, that’s not to say it’s going to happen overnight, and you can bet the folks at Platform had this longer-term vision in mind when they decided now would be a good time to target financial services. Zhou said a lot of work in the market is still being done around data-intensive analytics applications, but “there are many, many other applications in the datacenters of financial services companies that can really benefit from grid computing or from virtualization.”

He believes it will take years for this evolution to reach its peak across the enterprise, with the current state of adoption of grid for business applications in financial services being the same as it was for analytics applications three to four years ago. In the spirit of being ahead of the curve, though, Platform has been focusing on this evolution for the past four years via its work with the SAS Institute. The two combined to create SAS Grid Manager, which allows SAS applications to be run across a cluster versus the traditional SMP server, and in the last six months, said Zhou, the companies have seen success stories across numerous markets.

Essentially, as illustrated by the global, service-oriented plans of Lehman Brothers, the grid market is beginning to flatten out and expand horizontally, with companies intent not only on solving HPC issues, but also on transforming their datacenters into more agile, adaptive and virtualized platforms. Zhou certainly is attuned to these changes, and he expects his company to evolve along with its customers. “I think we are just going through that transformation,” he said, “from a vertically oriented grid computing software company focused on target application workload to a more horizontal systems infrastructure company as a component of the overall IT infrastructure in the datacenter.”
