Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Awards for Outstanding High Performance Computing Achievements Presented at SC10
TACC’s Student Cluster Challenge Team Wins Highest Linpack Award at SC10
Earthquake Simulation Breaks Computational Records, Promises Better Quake Models
ScaleMP and HP Deliver Record-Breaking SMP
Fibre Channel Industry Association Validates FCoE and 8GFC Interoperability
Penguin Computing Releases Latest Scyld ClusterWare Enhancement: Scyld Insight
Molecular Simulations Confirm Role of Functional Rotation in Multidrug Resistance
Supercomputing Points Way to Sun-Resistant Plastics
iray Renderer by mental images Supports 3D Product Design in Dassault Systèmes CATIA V6
Stockton Computational Science Program Earns Top National Honors at SC10
Blood Simulation on Jaguar Takes Gordon Bell Prize
Red Bull Racing Wins F1 Drivers’ Championship, Powered by Platform Computing
GeneGo Announces Integration with Agilent’s GeneSpring Bioinformatics Solution
CSC Selected to Support FAA NextGen Initiative
Exascale Is the New Petascale
Announced Monday, after the big bash in New Orleans – you know the one – was the launch of the Exascale Technology and Computing Institute (ETCi) at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, with Pete Beckman at the helm. The stated goal of ETCi is to “focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems.” Exascale machines are coveted for their ability to run sophisticated simulations that aren’t possible with today’s systems, such as large-scale climate change scenarios.
With the crossing of the petascale barrier barely in the rearview mirror since IBM Roadrunner’s big accomplishment in 2008, it’s now all about exascale: designing computers one thousand times faster than today’s petascale machines, and, oh yeah, getting the software to run on them. Exascale machines will have at least an exaflops of power — that is a quintillion, or one million trillion, floating point operations per second. On the one hand, getting to exascale is quite doable using brute force: connect enough superfast cores, apply some tweaks here and there, and ta-da! But that’s probably not the best way and may not be useful for real-world applications. That’s why doing it right will require some fresh ideas, innovative technologies, and experienced talent.
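The peta-to-exa arithmetic above can be sketched in a few lines (a back-of-the-envelope illustration only; the operation count for the hypothetical simulation is ours, not from any specific workload):

```python
# Scale arithmetic for the jump from petascale to exascale.
PETAFLOPS = 10**15   # floating point operations per second
EXAFLOPS = 10**18    # a quintillion (one million trillion) ops/sec

# An exascale machine is one thousand times faster than a petascale one.
speedup = EXAFLOPS // PETAFLOPS
print(speedup)  # 1000

# A hypothetical simulation needing 10^21 operations, timed at each scale.
ops = 10**21
print(ops / PETAFLOPS / 3600)  # ~277.8 hours at petascale
print(ops / EXAFLOPS / 3600)   # ~0.28 hours at exascale
```

The same workload that ties up a petascale system for weeks of wall-clock time finishes in minutes at exascale, which is why large-scale climate scenarios are a recurring motivator.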
Pete Beckman, the man tasked with leading this new undertaking, highlighted this point:
“Supercomputing architectures are rapidly changing. New technology will necessitate transforming system software and applications to enable new scientific discovery at extreme scales. By using principles of co-design, computer scientists and applied mathematicians, industrial partners, and the scientists using today’s supercomputers can work together to make exascale computing a reality.”
With China currently in the lead of the worldwide supercomputing race, the US should be all the more motivated to win the race to exascale.
NCSA Set to Deploy IBM’s GPFS File System
Citing “streamlined data storage” as a main benefit, the National Center for Supercomputing Applications (NCSA) announced Friday that it will soon implement IBM’s General Parallel File System (GPFS) across all its supercomputing systems, including the highly anticipated Blue Waters system. GPFS is geared for high-performance, scalable clustered file management. In addition to simplifying cluster file system administration, it is expected to provide reliable, concurrent high-speed file access to applications running on multiple nodes of a cluster. With tools capable of managing petabytes of data and billions of files, it is a good fit for the sustained-petaflop Blue Waters system.
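The core idea that lets a parallel file system serve many nodes concurrently is block striping: a file is split into fixed-size blocks distributed round-robin across storage servers, so different nodes can read or write different blocks at the same time. Here is a minimal sketch of that placement logic (the function name, block size, and server count are illustrative, not GPFS internals):

```python
def stripe(data: bytes, block_size: int, num_servers: int):
    """Assign each fixed-size block of `data` to a storage server
    round-robin, the basic layout used by striped parallel file systems."""
    placement = {}
    for offset in range(0, len(data), block_size):
        block_index = offset // block_size
        server = block_index % num_servers   # round-robin placement
        placement.setdefault(server, []).append(data[offset:offset + block_size])
    return placement

# Twelve bytes, 4-byte blocks, three servers: blocks 0..2 land on
# servers 0, 1, 2 in turn.
layout = stripe(b"ABCDEFGHIJKL", block_size=4, num_servers=3)
print(layout)  # {0: [b'ABCD'], 1: [b'EFGH'], 2: [b'IJKL']}
```

Because consecutive blocks live on different servers, a large sequential read can be serviced by all servers in parallel, which is where the “concurrent high-speed file access” comes from.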
Bill Kramer, the deputy project director for Blue Waters, remarked on the launch:
“A high-performance, parallel, facility-wide file system has been our vision for a long time. This is a fundamental enabler of future data-focused activities at NCSA and Illinois. This allows us to be at the forefront of data-intensive science.”
According to an NCSA Web document, the center has been testing several parallel file systems. In addition to GPFS, NCSA also tried out the Lustre file system and SGI’s CXFS clustered file system.
Michelle Butler, leader of NCSA’s Storage Enabling Technologies, provided more details on the selection process, citing cost as a factor:
“In the past, the options for file systems and support have been costly or have required full-time on-staff experts. The GPFS multi-system offering allows NCSA and Illinois to use one of the best file systems in the world today at reasonable cost on all clusters, promoting shared file systems such as the one NCSA will provide across all its compute platforms.”
With all the attention being given to Lustre lately, it struck me as somewhat ironic that NCSA went with GPFS, although, as with most big academic institutions, the decision was most likely made over a long period of time. And Kramer pointed out that compatibility with Blue Waters was the primary objective:
“The driving factor for this agreement was the Blue Waters system. It provided the critical mass to make such a novel agreement between Illinois and IBM attractive in a cost-effective manner. We are the first institution to reach an agreement with IBM to do this with all machines across all architectures at full scale.”