September 27, 2010
Benchmark at IBM's Test Centre demonstrates massive time savings when calculating risk
LONDON, Sept. 27 -- Sophis, a leading provider of cross-asset, front-to-back portfolio and risk management solutions, today announced a significant partnership with Platform Computing, the leader in cluster, grid and cloud management software for the financial services industry. Sophis is now offering an integrated solution with Platform Symphony that lets users in the banking, insurance and investment management sectors distribute resource-hungry computations such as P&L and sensitivity calculations, instrument pricing, risk simulations and value at risk (VaR).
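The release does not describe Platform Symphony's API, but the general pattern it applies to portfolio valuation is scatter/gather over independent pricing tasks. As a rough illustration only, here is a minimal Python sketch using the standard library's process pool as a stand-in for the grid middleware; the pricing function and data are hypothetical.

```python
# Sketch of the scatter/gather pattern a grid scheduler applies to portfolio
# valuation. Python's process pool stands in for Platform Symphony, whose
# actual API is not shown in the source. All names here are hypothetical.
from concurrent.futures import ProcessPoolExecutor
import random

def price_position(position):
    """Toy stand-in for an instrument-pricing task (hypothetical model)."""
    random.seed(position["id"])  # deterministic toy "price"
    return position["notional"] * (1.0 + random.uniform(-0.05, 0.05))

def portfolio_value(positions, workers=8):
    # Each position prices independently, so the work is embarrassingly
    # parallel -- the property that lets a grid scale it near-linearly.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        values = pool.map(price_position, positions)
    return sum(values)

if __name__ == "__main__":
    book = [{"id": i, "notional": 1_000_000.0} for i in range(32_000)]
    print(f"Portfolio value: {portfolio_value(book):,.0f}")
```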
Sophis and Platform carried out a benchmark test of their joint solution at IBM's Product and Solution Support Centre (PSSC) in Montpellier, France, in July 2010, on IBM's latest hardware: IBM Power 750, IBM XIV Storage System, IBM System x3850 M2 and IBM BladeCenter HS22.
The test ran a historical VaR calculation on a multi-asset portfolio of 32,000 positions in OTC structured products, representative of a real portfolio. Computing the VaR across 270 historical scenarios took four hours on fewer than 300 nodes and under 90 minutes on slightly over 800 nodes, scaling linearly with no apparent limit.
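Historical VaR revalues the portfolio under each past scenario and reads the loss quantile off the resulting P&L distribution; because each of the 270 scenario revaluations is independent, the workload splits cleanly across grid nodes. A minimal sketch of the final aggregation step, with made-up P&L numbers rather than anything from the benchmark:

```python
# Minimal historical-VaR aggregation sketch; scenario P&L values are
# synthetic stand-ins, not Sophis's model or the benchmark's data.
import numpy as np

def historical_var(pnl_by_scenario, confidence=0.99):
    """VaR as the loss at the given confidence level across scenarios."""
    # pnl_by_scenario holds one portfolio P&L per historical scenario.
    return -np.percentile(pnl_by_scenario, 100 * (1 - confidence))

rng = np.random.default_rng(0)
scenario_pnl = rng.normal(0.0, 2.5e6, size=270)  # 270 scenarios, toy numbers
print(f"99% historical VaR: {historical_var(scenario_pnl):,.0f}")
```

The reported timings are consistent with near-linear scaling: four hours on roughly 300 nodes implies about 4 x 300/800, or 1.5 hours, on roughly 800 nodes, which matches the 90-minute result.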
Samer Ballouk, head of product management and business development at Sophis, said: "The results of this benchmark with Platform Computing are very good news for our customers, who have increasingly demanding risk management and portfolio valuation requirements. By speeding up calculations using a grid approach, they can introduce an intra-day VaR calculation, for example, and comply with the latest guidance on risk management and reporting."
Tripp Purvis, vice president of business development at Platform Computing, said: "Calculating risk is one of the most critical processes at any financial institution. Our partnership with Sophis will allow financial services companies that use our integrated solution to run simulations and complete analyses in a timely manner. In addition, the benchmark results show that users will benefit from the ability to distribute workloads across a grid infrastructure for maximum resource use."
About Sophis
Founded in 1985, Sophis is a leading provider of cross-asset portfolio and risk management solutions for capital markets, investment managers, and corporate and insurance companies. The company has a global presence with offices around the world. Sophis serves over 6,000 users at 130 market-leading institutions, including investment banks, asset managers, hedge funds and insurance companies, with its three solutions: RISQUE, dedicated to the sell-side, and VALUE and iSophis, dedicated to the buy-side. In July 2007, the private equity fund Advent International acquired a majority stake in Sophis. www.sophis.com.
About Platform Computing
Platform Computing is the leader in cluster, grid and cloud management software -- serving more than 2,000 of the world's most demanding organizations. For 18 years, our workload and resource management solutions have delivered IT responsiveness and lower costs for enterprise and HPC applications. Platform has strategic relationships with Cray, Dell, HP, IBM, Intel, Microsoft, Red Hat, and SAS. www.platform.com.
Source: Platform Computing; Sophis