November 15, 2010
New features in flagship workload management solution simplify administration and boost user productivity with even higher resource utilization and scalability
NEW ORLEANS, Nov. 15 -- Platform Computing, the leader in cluster, grid and cloud management software, today announced the availability of the latest version of its flagship product family, Platform LSF, the industry's most comprehensive workload scheduling solution for high performance computing (HPC). Designed to handle complex, distributed HPC environments, Platform LSF maximizes existing IT infrastructure by allowing more work to be done with fewer computing resources in the fastest time possible. Version 8 introduces several new features, including the ability to delegate administrative rights to line-of-business managers; live, dynamic cluster reconfiguration; guaranteed resources to ensure service level agreements (SLAs) are met; flexible fairshare scheduling policies; and unparalleled scalability to support the large clusters in use today.
"As organizations process larger data volumes at increasingly efficient rates, there is an even greater need for workload management solutions designed for powerful, efficient operation, such as Platform LSF," said Steve Conway, IDC research vice president, High Performance Computing. "IDC forecasts that the HPC management software market will continue to experience healthy growth over the next five years to address the challenges of larger scale and newer environments."
"With Platform LSF we can intelligently schedule workload, ensuring that we get the maximum return on our significant investment in simulation tools and hardware," said Nathan Sykes, head of CFD, Red Bull Racing. "We partnered with Platform Computing to deploy a complete software management solution and, as a result, benefitted from a 20 percent improvement in the throughput of our engineering analysis. Platform Computing has helped boost the performance of our cars, resulting in a number of significant wins, including the recent 2010 Constructor's Championship at the Brazilian Grand Prix. With the new capabilities in Platform LSF 8 we can further improve our throughput, increasing the number of virtual engineering simulations we can complete, which will enable us to continue building the fastest and most aerodynamically efficient Formula 1 cars possible."
"Maximizing computing resource utilization has posed a constant challenge for IT departments serving the needs of HPC users. Platform LSF 8 solves this problem by providing more intelligent workload scheduling and letting IT managers impartially delegate administrative rights to the appropriate users. With these new HPC management features, Platform Computing continues to lead the market in providing workload solutions that maximize HPC computing efficiency and user productivity while cutting job completion time for complex computing projects," said Ken Hertzler, Vice President, Product Management, Platform Computing.
New features in Platform LSF 8 include:
- Guaranteed Resources – Ensures resources are fairly allocated and properly designated to user groups or specific jobs. Because of the limitations of other schedulers, many customers end up creating resource silos and overly complex scheduling policies. Platform LSF resolves these inefficiencies by guaranteeing resources according to service level objectives.
- Delegation of Administrative Rights – Provides flexible designation of administrative rights across multiple levels of an organization, empowering line-of-business managers to control their own project workloads. As a result, the LSF administrator is freed from managing and monitoring internal project priorities and membership changes.
- Fairshare and Preemptive Scheduling Policies – Offers exceptional scheduling flexibility, enabling administrators to fine-tune production policies and share definitions at both the queue and global levels. Using Platform LSF, administrators can stipulate when jobs may be preempted and for how long while still ensuring SLAs are met.
- Live Resource Reconfiguration – Extends administrative flexibility by allowing common configuration changes to be applied without a full reconfiguration or restart. Changes are logged, making it easy to track or revert configurations as necessary. With live reconfiguration, downtime is reduced and administrators can make needed adjustments immediately rather than waiting for scheduled maintenance periods or off-peak hours.
- Integrated Application Support – Scripting guidelines and application templates simplify job submission, reduce setup time and minimize operational errors. The web-based interface in Platform Application Center 8 enables remote job monitoring, easy access to job-related data, and basic operations such as stopping, suspending, resuming or re-queuing jobs through a web browser (a short scripting sketch of these operations follows this list).
- Unparalleled Scalability – Platform LSF scales to 100,000 cores and 1.5 million queued jobs for very high-throughput EDA workloads, and even higher for more traditional HPC workloads.
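To make the job-control operations above concrete, here is a minimal Python sketch that drives LSF through its standard command-line tools (bsub, bjobs, bstop, bresume, brequeue). It assumes an LSF cluster whose environment has already been sourced; the queue name "normal", the service class "sla_design" and the job script "sim.sh" are illustrative placeholders rather than names from this announcement, and the -sla option is relevant only where service classes with guaranteed resources have been configured.

    import subprocess

    def submit(command, queue="normal", cores=8, sla=None):
        """Submit a job with bsub and return its numeric LSF job ID."""
        args = ["bsub", "-q", queue, "-n", str(cores)]
        if sla:
            # Attach the job to a service class so it can draw on that
            # class's guaranteed resources (hypothetical class name).
            args += ["-sla", sla]
        args.append(command)
        out = subprocess.check_output(args, universal_newlines=True)
        # bsub replies: Job <12345> is submitted to queue <normal>.
        return out.split("<")[1].split(">")[0]

    def job_state(job_id):
        """Return the STAT column reported by bjobs for one job."""
        out = subprocess.check_output(["bjobs", job_id],
                                      universal_newlines=True)
        # Output is a header line, then: JOBID USER STAT QUEUE ...
        return out.splitlines()[1].split()[2]

    if __name__ == "__main__":
        jid = submit("./sim.sh", sla="sla_design")
        print("job %s is %s" % (jid, job_state(jid)))
        subprocess.call(["bstop", jid])     # suspend the job
        subprocess.call(["bresume", jid])   # resume it
        subprocess.call(["brequeue", jid])  # kill and requeue it

The bstop/bresume/brequeue calls at the end mirror the basic operations Platform Application Center 8 exposes through the browser; fairshare and preemption policies are enforced by the scheduler itself, so jobs need no additional flags to participate in them.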
Platform LSF 8 will be available in January 2011.
About Platform Computing
Platform Computing is the leader in cluster, grid and cloud management software -- serving more than 2,000 of the world's most demanding organizations. For 18 years, our workload and resource management solutions have delivered IT responsiveness and lower costs for enterprise and HPC applications. Platform has strategic relationships with Cray, Dell, HP, IBM, Intel, Microsoft, Red Hat, and SAS. Visit www.platform.com.
SOURCE: Platform Computing Corp.