November 24, 2010
Here is a collection of highlights from this week's news stream as reported by HPCwire.
Awards for Outstanding High Performance Computing Achievements Presented at SC10
TACC's Student Cluster Challenge Team Wins Highest Linpack Award at SC10
Earthquake Simulation Breaks Computational Records, Promises Better Quake Models
ScaleMP and HP Deliver Record-Breaking SMP
Fibre Channel Industry Association Validates FCoE and 8GFC Interoperability
Penguin Computing Releases Latest Scyld ClusterWare Enhancement: Scyld Insight
Molecular Simulations Confirm Role of Functional Rotation in Multidrug Resistance
Supercomputing Points Way to Sun-Resistant Plastics
iray Renderer by mental images Supports 3D Product Design in Dassault Systèmes CATIA V6
Stockton Computational Science Program Earns Top National Honors at SC10
Blood Simulation on Jaguar Takes Gordon Bell Prize
Red Bull Racing Wins F1 Drivers' Championship, Powered by Platform Computing
GeneGo Announces Integration with Agilent's GeneSpring Bioinformatics Solution
CSC Selected to Support FAA NextGen Initiative
Exascale Is the New Petascale
Announced Monday, after the big bash in New Orleans -- you know the one -- was the launch of the Exascale Technology and Computing Institute (ETCi) at the U.S. Department of Energy's (DOE) Argonne National Laboratory, with Pete Beckman at the helm. The stated goal of ETCi is to "focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems." Exascale machines are desired for their ability to run sophisticated simulations, for example, large-scale climate change scenarios, that aren't possible with today's systems.
With the crossing of the petascale barrier barely in the rearview mirror since IBM Roadrunner's big accomplishment in 2008, it's now all about exascale: designing computers one thousand times faster than today's petascale machines, and, oh yeah, getting the software to run on them. Exascale machines will deliver at least one exaflops -- that is a quintillion, or one million trillion, floating point operations per second. On the one hand, getting to exascale is quite doable using brute force: connect enough superfast cores, apply some tweaks here and there, and tada! But that's probably not the best way and may not be useful for real-world applications. That's why doing it right will require some fresh ideas, innovative technologies, and experienced talent.
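To put the brute-force approach in perspective, here is a quick back-of-the-envelope sketch in Python; the 10-gigaflops-per-core figure is an illustrative assumption, not a vendor spec:

```python
# Rough sizing of an exascale machine built by brute force.
# The per-core peak below is an illustrative assumption (circa-2010 ballpark), not real hardware data.

EXAFLOPS  = 1e18   # 10^18 floating point operations per second
PETAFLOPS = 1e15   # 10^15 flops, roughly the Roadrunner class

flops_per_core = 10e9   # assumed 10 gigaflops peak per core

cores_needed = EXAFLOPS / flops_per_core
speedup = EXAFLOPS / PETAFLOPS

print(f"Cores needed at 10 GF/core: {cores_needed:,.0f}")   # 100,000,000
print(f"Speedup over a 1 PF system: {speedup:,.0f}x")       # 1,000x
```

Roughly 100 million cores even under these generous assumptions, which is exactly why raw scaling alone, without rethinking the software, only gets you part of the way.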
Pete Beckman, the man tasked with leading this new undertaking, highlighted this point:
"Supercomputing architectures are rapidly changing. New technology will necessitate transforming system software and applications to enable new scientific discovery at extreme scales. By using principles of co-design, computer scientists and applied mathematicians, industrial partners, and the scientists using today's supercomputers can work together to make exascale computing a reality."
With China currently leading the worldwide supercomputing race, the US should be all the more motivated to win the race to exascale.
NCSA Set to Deploy IBM's GPFS File System
Citing "streamlined data storage" as a main benefit, the National Center for Supercomputing Applications (NCSA) announced Friday that it will soon be implementing IBM's General Parallel File System (GPFS) across all its supercomputing systems, including the highly-anticipated Blue Waters System. IBM's GPFS, which stands for General Parallel File System, is geared for high-performance, scalable clustered file management. In addition to simplifying cluster file system administration, it is expected to provide reliable, concurrent high-speed file access to applications running on multiple nodes of clusters. With tools capable of managing petabytes of data and billions of files, it is a good fit for the sustained-petaflop Blue Waters system.
Bill Kramer, the deputy project director for Blue Waters, remarked on the launch:
"A high-performance, parallel, facility-wide file system has been our vision for a long time. This is a fundamental enabler of future data-focused activities at NCSA and Illinois. This allows us to be at the forefront of data-intensive science."
According to an NCSA Web document, the center has been testing several parallel file systems. In addition to GPFS, NCSA also tried out the Lustre file system and SGI's CXFS clustered file system.
Michelle Butler, leader of NCSA's Storage Enabling Technologies group, provided more details on the selection process, citing cost as a factor:
"In the past, the options for file systems and support have been costly or have required full-time on-staff experts. The GPFS multi-system offering allows NCSA and Illinois to use one of the best file systems in the world today at reasonable cost on all clusters, promoting shared file systems such as the one NCSA will provide across all its compute platforms."
With all the attention being given to Lustre lately, it struck me as somewhat ironic that NCSA went with GPFS, although, as with most big academic institutions, the decision was most likely made over a long period of time. And Kramer pointed out that compatibility with Blue Waters was the primary objective:
"The driving factor for this agreement was the Blue Waters system. It provided the critical mass to make such an novel agreement between Illinois and IBM attractive in a cost-effective manner. We are the first institution to reach an agreement with IBM to do this with all machines across all architectures at full scale."