March 16, 2009
When Platform Computing formed its financial services business unit in May 2008, it probably didn't know that within six months the financial industry would begin to implode, bringing much of the global economy down with it. Now that some of the biggest investment banks are scrambling just to survive, will Platform's new focus on Wall Street pay off?
According to Jim Mancuso, Platform's general manager of the new business unit, the answer is yes. "It's not to say that everything is perfect and rosy and we haven't been impacted," he says, "but it is to say our business is still growing and we actually have had some really good results in the last quarter and moving forward."
Unless bartering becomes the new model for financial transactions, investment banks, insurance companies, and hedge fund firms are still going to be running HPC applications to keep their financial services running smoothly. Of course, demand for some of the more creative investment vehicles, like the infamous credit default swaps and subprime collateralized debt obligations, is going to fall by the wayside, but investing won't evaporate and neither will the need for financial analytics.
Mancuso sees plenty of opportunities to sell Symphony, the company's grid middleware software aimed at the financial sector. From his perspective, what Platform does for a living -- namely speeding up performance and increasing hardware utilization -- enables companies to do more work for less money. "It's very much in line with why people buy anything today," he says.
One of the things Platform has going for it, even as IT budgets get scaled back, is that grid middleware enables Wall Street firms to derive more compute capacity out of their existing resources. Symphony is a workload scheduler that can be used for an array of financial analytics applications. Unlike Platform's batch-oriented LSF offering, Symphony offers a service-oriented architecture (SOA) model that the financial industry has come to accept as a more natural way to deploy services. The middleware is designed to spread compute-intensive workloads like real-time pricing, value-at-risk, and Monte Carlo simulations across a compute grid. But really, almost any type of financial application -- risk, pricing, indexing and analytics -- is fair game here.
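The appeal of this kind of workload is that it splits cleanly into independent tasks. The sketch below is not Symphony's API; it uses Python's standard multiprocessing module as a stand-in for a grid of workers, simply to illustrate the task-partitioning pattern behind a distributed Monte Carlo pricing run.

```python
# Minimal sketch of farming out a Monte Carlo pricing run to parallel workers.
# multiprocessing stands in for a grid scheduler here; this is NOT Symphony's
# API, only an illustration of how the work divides into independent tasks.
import math
import random
from multiprocessing import Pool

def price_paths(args):
    """One task: price a European call over a batch of simulated paths."""
    n_paths, spot, strike, rate, vol, t, seed = args
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = spot * math.exp((rate - 0.5 * vol**2) * t + vol * math.sqrt(t) * z)
        payoff_sum += max(s_t - strike, 0.0)
    return payoff_sum

if __name__ == "__main__":
    total_paths, n_tasks = 1_000_000, 8
    tasks = [(total_paths // n_tasks, 100.0, 105.0, 0.03, 0.25, 1.0, seed)
             for seed in range(n_tasks)]
    with Pool(n_tasks) as pool:                      # the "grid" of workers
        partial_sums = pool.map(price_paths, tasks)  # each task runs independently
    price = math.exp(-0.03 * 1.0) * sum(partial_sums) / total_paths
    print(f"Monte Carlo call price: {price:.4f}")
```

Because each batch of paths needs no communication with the others, adding more workers scales the run almost linearly, which is exactly the property a grid scheduler exploits.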
"You'll even see large Excel spreadsheets running under Symphony," notes Mancuso. He's seen some really large spreadsheets that were taking three days to run on individual servers. By distributing the calculations across a server grid, execution times were reduced to three hours.
The company's decision to form an independent business unit for financial services was a result of the importance of this particular vertical market to the company. According to Mancuso, the financial services sector is the number one growth vertical for Platform. Even in times like these, when every week seems to serve up another helping of bad economic news, Mancuso reports that he continues to see a lot of demand for services in the financial industry. And the company is even looking to expand its reach beyond some of the big investment banks into firms managing hedge funds, mutual funds, pension funds, and insurance.
As part of the renewed commitment to the financial sector, Platform brought David Warm on board as the CTO of the new business unit. Warm, who joined the company in December, is a Wall Street veteran with over 20 years of experience in the industry, including stints with Merrill Lynch, Goldman Sachs, and JP Morgan Chase. Much of his work on Wall Street was in the area of client service delivery, essentially designing and packaging infrastructure solutions for these firms. As such, Warm brings street smarts to Platform's financial group and helps fill a gap in subject matter expertise.
Platform's latest release of Symphony (version 4.1) reflects some of the ways the company is continuing to refine its approach to grid-friendly financial analytics. The updated middleware has a number of scalability and security enhancements, but the main additions are centered on improving data management. Since the size and unwieldiness of market data can easily slow down financial apps, optimizing the way bytes are sent through the grid's network can produce some significant speed-ups. The whole idea, says Warm, is to squeeze as much of the latency out of data I/O as possible.
For example, the newest Symphony version adds something called Direct Data Transfer (DDT), which allows the client's application data to be accessed directly by the service, without having to be sent through the middleware itself. "You can carve off additional milliseconds of time," explains Warm, "which may not sound significant, but when you're running tens of thousands of large tasks on a grid, it can be very significant." He says they've seen 50 percent savings in data transfer times using the DDT facility, and expects those savings to grow as data sizes increase.
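The general idea behind this kind of optimization is easy to illustrate. The following Python sketch is hypothetical and does not reflect Symphony's implementation: it contrasts pushing a full data set through the scheduler with every task against publishing the data once and handing each task only a lightweight reference that the service dereferences itself.

```python
# Hypothetical sketch of the idea behind direct data transfer. Instead of the
# client's data riding through the middleware with every task, it is published
# once (here: a file on shared storage) and tasks carry only a reference.
# This is an illustration of the concept, not Symphony's implementation.
import pickle
import tempfile
from pathlib import Path

def submit_by_value(task_queue, market_data, params):
    """Naive approach: the full data set is copied into every task."""
    for p in params:
        task_queue.append({"data": market_data, "params": p})

def submit_by_reference(task_queue, market_data, params):
    """DDT-style approach: ship a reference; workers fetch the data directly."""
    ref = Path(tempfile.gettempdir()) / "market_data.pkl"
    ref.write_bytes(pickle.dumps(market_data))      # published once
    for p in params:
        task_queue.append({"data_ref": str(ref), "params": p})

def worker_run(task):
    """Worker side: read the data from the reference, bypassing the broker."""
    data = pickle.loads(Path(task["data_ref"]).read_bytes())
    return len(data), task["params"]

tasks = []
submit_by_reference(tasks, {"AAPL": [1.0, 2.0, 3.0]},
                    params=[{"horizon": d} for d in (1, 5, 10)])
print(worker_run(tasks[0]))
```

The savings per task are small, but multiplied across tens of thousands of tasks they add up, which is the arithmetic Warm is describing.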
Another feature they've added is data compression, so that transfers across the network complete in a fraction of the time of sending the raw data. Warm says the compression and decompression steps take a certain amount of time, but the overhead is modest even for small parcels of data, and as data sizes grow into the tens-of-gigabytes range it becomes even less significant.
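The trade-off is the familiar compress-before-send pattern. Here is a minimal sketch using Python's standard zlib module; the codec and the data are purely illustrative and the actual ratio depends on the data being shipped.

```python
# Minimal compress-before-send sketch using zlib. The data set and the
# compression ratio shown are illustrative only, not Symphony benchmarks.
import pickle
import zlib

def send_compressed(payload, sock_send):
    """Serialize and compress a payload before pushing it over the wire."""
    raw = pickle.dumps(payload)
    packed = zlib.compress(raw, level=6)
    sock_send(packed)
    return len(raw), len(packed)

def receive_compressed(packed_bytes):
    """Decompress on the receiving node before handing data to the task."""
    return pickle.loads(zlib.decompress(packed_bytes))

# Illustrative payload: a year of daily prices for 1,000 instruments.
prices = {f"INSTR{i:04d}": [100.0 + 0.01 * d + 0.5 * i for d in range(252)]
          for i in range(1000)}
raw_len, packed_len = send_compressed(prices, lambda b: None)
print(f"raw: {raw_len} bytes, compressed: {packed_len} bytes")
```

For small messages the fixed cost of compressing can rival the transfer savings, but as payloads grow the transfer time dominates, which is why the overhead matters less at the tens-of-gigabytes scale Warm mentions.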
The third new feature is a facility that automates the synchronization of common data used across multiple nodes. Basically you just tell the Symphony scheduler what constitutes the common data, and the middleware automatically maintains data coherence across the grid. If one node changes the data, the scheduler makes sure all the other nodes have the updated version.
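The article does not describe Symphony's mechanism in detail, so the following is only a hedged sketch of one common way to keep shared data coherent: a central copy carries a version counter, and each node refreshes its local cache whenever the counter moves.

```python
# Hedged sketch of version-based coherence for shared "common data".
# Symphony's actual mechanism is not documented here; this only illustrates
# the general idea of keeping every node's copy in sync with a master copy.
class CommonDataStore:
    """Central copy of the common data plus a monotonically increasing version."""
    def __init__(self, data):
        self.data = dict(data)
        self.version = 0

    def update(self, key, value):
        self.data[key] = value
        self.version += 1          # any change bumps the version

class NodeCache:
    """Per-node cache that refreshes itself whenever the central version moves."""
    def __init__(self, store):
        self.store = store
        self.local = dict(store.data)
        self.seen_version = store.version

    def get(self, key):
        if self.seen_version != self.store.version:   # stale: pull the update
            self.local = dict(self.store.data)
            self.seen_version = self.store.version
        return self.local[key]

store = CommonDataStore({"usd_curve": [0.010, 0.012, 0.015]})
node_a, node_b = NodeCache(store), NodeCache(store)
store.update("usd_curve", [0.011, 0.013, 0.016])       # one node changes the data
print(node_a.get("usd_curve") == node_b.get("usd_curve"))  # True: both see the update
```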
Beyond the Symphony product, the financial group has also been developing a cloud offering. The product Platform has in mind is aimed at private clouds, that is, internal clouds that aggregate a financial institution's IT hardware and software resources into essentially a company-wide grid. Mancuso says the new product is a natural extension of what they're doing with Symphony, since it's just another level of application virtualization. He predicts the cloud product will show up sometime around the middle of 2009.
Mancuso thinks the jump from private clouds to public clouds will occur later, after existing datacenters are tapped out. In the financial services industry, there is also the additional concern about privacy and security. It would make a lot of investors nervous to have their financial portfolios sitting on disks in Amazon's EC2. And the Wall Street firms themselves aren't exactly comfortable with the idea of exposing their own analytics software and investment models on public systems. So for the time being, Mancuso's business unit has focused its efforts on internal clouds.
Platform's move into the cloud opens up some opportunities outside of its HPC/grid computing roots. Warm points out that in the Wall Street firms he's been involved with, only about 15-20 percent of the infrastructure is dedicated to HPC applications. The rest of the infrastructure is used for Web serving, database processing, and other types of applications. "That's why we look at cloud as a huge growth area for us," he explains. "Now we can potentially tap the other 80 percent of the datacenter for our products."
From a financial firm's point of view, private clouds offer the promise of delivering a lot better utilization of existing hardware. Speaking again from his own experience, Warm says peak server use in Wall Street datacenters might only be about 30 percent, so there's plenty of capacity going to waste. At the same time, unifying all equipment and software under one system gives companies a lot more flexibility on how and when to run their applications. For these reasons, Platform is betting the time is now right for companies to transition from simple grids into private clouds.
"At the end of the day, the organizational mentality and culture at these companies is what has to change to adopt cloud," says Warm. "Grid and clusters were the first place where the silo mentality started to break down. They learned that they could share this platform across many lines of business and have their service level guaranteed. If cloud is to be successful it's not enough just to have a cute interface and a nifty little cloud logo on your software. You're going to have to prove that your software is going to be able to support the service levels as well as drive down costs and increase utilization."
That's not to say grid middleware is going to be mothballed. Mancuso says Symphony will continue to evolve in parallel with the upcoming cloud product. But, he adds, Platform is devoting a lot of resources to cloud middleware development right now. "In large part, we believe it to be the future of the company."