April 16, 2012
FREMONT, Calif., April 16 -- SGI, the trusted leader in technical computing, today announced an expansion of its long-term partnership with Altair Engineering, Inc. SGI is collaborating with Altair to develop scheduling and systems management solutions that support energy-efficient computing at exascale levels. As part of the expanded relationship, SGI has entered into an original equipment manufacturer (OEM) agreement that names Altair SGI's preferred workload management supplier. The agreement allows SGI to integrate Altair's PBS Professional® scheduler on SGI HPC systems more cost-effectively, keeping costs favorable as system sizes and CPU-core counts continue to grow.
"SGI and Altair are committed to delivering energy-efficient and affordable computing resources to the high-performance computing community," said Bill Nitzberg, chief technology officer for PBS Works at Altair. "With this project, we have taken a lead role in providing the enabling capabilities for power management in the age of exascale computing."
SGI Management Center is a comprehensive operational management application for technical computing and part of a suite of SGI software products that help companies optimize large-scale system power consumption by distilling energy information into meaningful decision points. Under this agreement, SGI Management Center will be more tightly integrated with Altair's PBS Professional to improve system performance and resource scheduling.
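To illustrate what distilling energy information into decision points can mean in practice, here is a minimal sketch in Python. The sampling data, the per-node power budget, and the decision_points function are all hypothetical illustrations, not SGI Management Center's actual interface.

```python
from statistics import mean

# Hypothetical per-node power samples in watts, e.g. polled once a minute.
samples = {
    "r1n01": [310, 355, 600, 590],
    "r1n02": [295, 300, 305, 298],
}

POWER_BUDGET_WATTS = 500  # illustrative per-node cap, not a real SGI default

def decision_points(samples, budget):
    """Reduce raw power telemetry to per-node decisions a scheduler can act on."""
    report = {}
    for node, watts in samples.items():
        avg, peak = mean(watts), max(watts)
        report[node] = {
            "avg_w": avg,
            "peak_w": peak,
            "over_budget": peak > budget,  # candidate for throttling or draining
        }
    return report

for node, info in decision_points(samples, POWER_BUDGET_WATTS).items():
    print(node, info)
```

The point of the reduction is that a scheduler never sees the raw sample stream, only a small set of actionable flags and summary figures per node.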
"Exascale is forcing a redefinition of the way customers approach computing and systems management," said Christian Tanasescu, vice president of software product engineering at SGI. "Companies won't be able to simply apply or extrapolate the work done at petascale levels to maintain performance once they move to the exascale range, and this agreement allows us to tailor solutions for our customers that allow them to scale their workloads efficiently and cost-effectively."
The goal of the collaboration is for SGI and Altair to deliver a software suite that manages large-scale systems for performance, resilience, and power optimization. With this software, capacity planning can be based on available power, scaling to meet the changing power requirements of a specific job or an entire workload. Focus areas for the project include workload and application profiling, and power-rate and load scheduling. The solution will enable intelligent scheduling based on power consumption, so users can run jobs on the most energy-efficient resource for a given application, or apply policies that schedule jobs when energy rates are lowest.
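The two policies described above can be made concrete with a short sketch, again in Python rather than Altair's actual PBS Professional API. The Node and Job structures, the power figures, and the tariff values are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    idle_watts: float        # baseline draw when the node is powered on
    watts_per_core: float    # incremental draw per busy core

@dataclass
class Job:
    name: str
    cores: int
    hours: float
    deferrable: bool         # may wait for a cheaper tariff window

def job_energy_kwh(node: Node, job: Job) -> float:
    """Estimated energy, in kWh, for running the job on this node."""
    watts = node.idle_watts + node.watts_per_core * job.cores
    return watts * job.hours / 1000.0

def pick_node(nodes: list[Node], job: Job) -> Node:
    """Policy 1: place the job on the most energy-efficient resource."""
    return min(nodes, key=lambda n: job_energy_kwh(n, job))

def should_defer(job: Job, rate_now: float, rate_offpeak: float) -> bool:
    """Policy 2: hold deferrable jobs until the energy rate drops."""
    return job.deferrable and rate_now > rate_offpeak

# Example: a 64-core, 2-hour job picks the cheaper node for this core
# count and waits for the off-peak tariff if it is allowed to.
nodes = [Node("dense", 400.0, 8.0), Node("lean", 250.0, 10.0)]
job = Job("cfd_run", cores=64, hours=2.0, deferrable=True)
best = pick_node(nodes, job)
print(best.name, job_energy_kwh(best, job), should_defer(job, 0.30, 0.12))
```

Note that the most efficient node depends on the job's shape: the "lean" node wins here at 64 cores, while a wider job would tip the estimate toward the node with the lower per-core draw.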
Altair Engineering, Inc., empowers client innovation and decision-making through software that optimizes the analysis, management and visualization of business and engineering information. Privately held with more than 1,500 employees, Altair has offices throughout North America, South America, Europe and Asia/Pacific. To learn more, visit www.pbsworks.com or www.altair.com.
SGI, the trusted leader in technical computing, is focused on helping customers solve their most demanding business and technology challenges. Visit sgi.com for more information.