November 09, 2010
LONDON, Nov. 9 -- Platform Computing, the leader in cluster, grid and cloud management software, today announced the availability of the latest version of Platform HPC, a complete high performance computing (HPC) management product. Platform HPC makes it easy for technical application users to harness the power and scalability of HPC clusters. Based on the industry's leading workload scheduler, Platform LSF, it features robust, user-friendly cluster provisioning, workload management, reporting and monitoring, MPI libraries and more, all of which are accessible via a web user interface. The web user interface also simplifies the application integration process so that customers can focus on their work, instead of managing their cluster.
This is more than just a stack of software; Platform HPC is a fully integrated, certified and supported product designed to ensure ease of use and simplified management. With the release of Platform HPC, Platform is also further investing in its channel strategy, with a focus on the technical application market and the independent software vendors (ISVs) that serve it.
"The market for HPC application software and middleware exceeded $2.3 billion in 2009, and we expect it to approach $3 billion in 2013," said Earl Joseph, IDC programme vice president for HPC. "HPC management software will become even more crucial for efficiently exploiting increasingly large, heterogeneous systems and newer environments such as grid and cloud computing. Platform Computing is well positioned to benefit from this trend."
"GreenPlanet HPCC leverages Platform Computing's software to assist interdisciplinary scientific collaborators across process intensive computing projects, such as climate modelling simulations done by our Earth Systems Science researchers, to predict worldwide weather patterns," said Ronald D. Hubbard, executive director, GreenPlanet HPCC at the University of California, Irvine School of Physical Sciences. "We place tremendous added value in having a robust and complete cluster solution that ensures our scientists can administer the system independently while accelerating their calculation times so that their research can be performed in a more productive, efficient, cost-effective manner."
"Dell has worked closely with Platform Computing for many years in order to deliver complete, integrated cluster solutions that are optimised and customised to our customers' unique requirements and environments," said Donnie Bell, senior manager, high performance computing solutions, Dell Inc. "Platform HPC software coupled with Dell PowerEdge solutions provides a robust management infrastructure designed to enable faster results for a wide range of workloads conducted across the scientific and research communities."
"Platform HPC removes the complexity and cost of implementing HPC clusters, which enables our customers to leverage the sophisticated simulation data and process management capabilities of SimManager to perform larger and more complex simulations and bring products to market faster," said John Janevic, vice president of strategic operations, MSC Software.
"In our 18 years leading the HPC industry, we've seen a growing number of HPC applications being deployed by technical users across a more diverse set of industries," said Tripp Purvis, vice president, business development, Platform Computing. "With Platform HPC we're removing the complexity and cost of implementing HPC cluster solutions. Platform is leading the market by bringing the easiest, most complete and cost-effective HPC management software available to technical application users, allowing them to focus on their work, not managing their clusters."
Organisations across a wide range of industries are investing more heavily in HPC resources and application development to improve business productivity, speed up research projects and reduce infrastructure costs. Whether they are working on computer-generated designs for automotive parts, scientific algorithms for global warming estimates or computer graphics rendering for film production, many engineers, scientists and designers are working with technical applications but struggling with poor application and workstation performance. Target vertical industries for Platform HPC include manufacturing, oil and gas, energy, life sciences, media/digital content creation, higher education and research, and government.
With Platform HPC, both experienced and novice HPC users are able to quickly and easily deploy, run and manage their HPC clusters while meeting the most demanding requirements for application performance and predictable workload management. They benefit from a mature, robust set of proven cluster management capabilities that are accessible through a unified portal interface and include a simplified application integration process. Platform HPC is the industry's most complete cluster software solution, allowing users to better concentrate on research or designs rather than managing complex computing environments. The release also includes a number of new enhancements.
Platform HPC is the only complete HPC management product with a single installer. It provides simplified cluster configuration and application integration and is pre-certified by leading hardware and software vendors. With the debut of this product, Platform's channel strategy positions the product for rapid expansion via Platform's extensive network of channel distributors and ISV partners.
Platform HPC is available immediately and can be purchased from one of Platform Computing's certified channel partners: http://www.platform.com/partners/directory.
About Platform Computing
Platform Computing is the leader in cluster, grid and cloud management software -- serving more than 2,000 of the world's most demanding organisations. For 18 years, our workload and resource management solutions have delivered IT responsiveness and lower costs for enterprise and HPC applications. Platform has strategic relationships with Cray, Dell, HP, IBM, Intel, Microsoft, Red Hat, and SAS. Visit www.platform.com.
Source: Platform Computing