June 01, 2010
Integration of Voltaire UFM software and Platform LSF enables more efficient management and operations of scale-out HPC centers and cloud implementations
HAMBURG, June 1 -- Voltaire Ltd., a leading provider of scale-out datacenter fabrics, and Platform Computing, the leader in cluster, grid and cloud management software, today announced a partnership to deliver optimized and automated management of datacenter resources in virtualized and cloud computing environments.
Through the integration of Voltaire Unified Fabric Manager (UFM) software and Platform LSF, intelligent workload scheduling and resource allocation extends from the application layer all the way down to the network fabric layer. The combined solution enables high performance computing, datacenter and cloud computing facilities to more easily support the constantly changing resource requirements and workloads inherent in those environments.
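The release does not describe the integration mechanism in detail. As a rough, hypothetical sketch of the kind of placement-aware job submission Platform LSF already handles, the Python snippet below drives LSF's standard bsub command with a resource requirement that packs tasks onto as few hosts as possible; the job name, core count and application command are illustrative assumptions, and the fabric-level automation UFM would add is not represented here.

    # Hypothetical illustration only: submit a 64-way MPI job through LSF's
    # bsub, asking the scheduler to pack 16 tasks per host so traffic stays
    # on as few fabric switches as possible. Job name and application are
    # assumptions; the UFM integration described above is not a bsub option.
    import subprocess

    bsub_cmd = [
        "bsub",
        "-J", "fabric_aware_demo",          # job name (assumed)
        "-n", "64",                         # request 64 slots
        "-R", "span[ptile=16]",             # pack 16 tasks per host
        "mpirun", "./solver", "input.dat",  # application command (assumed)
    ]

    # Run the submission and echo the job-ID line that bsub prints.
    result = subprocess.run(bsub_cmd, capture_output=True, text=True, check=True)
    print(result.stdout.strip())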
"HPC is a promising early adopter market for cloud computing, with some substantial cloud initiatives under way, mostly on the private cloud side," said Steve Conway, research vice president in IDC's High Performance Computing group. "The partnership between Platform and Voltaire aims to provide organizations with the ability to optimize HPC data resources and manage intensive applications across virtualized and cloud computing environments."
"The combination of Platform LSF and Voltaire UFM software over InfiniBand enables our mutual customers to experience new levels of performance and efficiency for managing HPC datacenter virtualization and cloud computing technologies," said Tripp Purvis, vice president, business development, Platform Computing. "The integration of Platform LSF with Voltaire UFM software extends Platform LSF's ability to optimize the allocation of resources in the datacenter across the entire network for more efficient IT operations."
"The ability to automate resources becomes increasingly critical in scale-out and cloud computing environments," said Asaf Somekh, vice president of marketing, Voltaire. "Combining the intelligence of Voltaire's UFM software with the dynamic provisioning capabilities of Platform LSF, enables customers to fully automate network configuration management and optimize the network. This meets the heavy needs of HPC datacenters and cloud computing environments that may be running tens to hundreds of applications."
More information about Voltaire UFM software is available at www.voltaire.com/UFM.
More information about Platform LSF is available at http://www.platform.com/workload-management/high-performance-computing.
The integrated Voltaire-Platform solution will be available in July through Voltaire and Platform resellers and channel partners.
About Platform Computing
Platform Computing is the leader in cluster, grid and cloud management software -- serving more than 2,000 of the world's most demanding organizations. For 17 years, Platform's workload and resource management solutions have delivered IT responsiveness and lower costs for enterprise and HPC applications. Platform has strategic relationships with Cray, Dell, HP, IBM, Intel, Microsoft, Red Hat, and SAS. Visit www.platform.com.
About Voltaire
Voltaire (NASDAQ: VOLT) is a leading provider of scale-out computing fabrics for datacenters, high performance computing and cloud environments. Voltaire's family of server and storage fabric switches and advanced management software improve performance of mission-critical applications, increase efficiency and reduce costs through infrastructure consolidation and lower power consumption. Used by more than 30 percent of the Fortune 100 and other premier organizations across many industries, including many of the TOP500 supercomputers, Voltaire products are included in server and blade offerings from Bull, HP, IBM, NEC, SGI and Sun. Founded in 1997, Voltaire is headquartered in Ra'anana, Israel, and Chelmsford, Mass. More information is available at www.voltaire.com or by calling 1-800-865-8247.
Source: Platform Computing Corp.; Voltaire Ltd.