June 29, 2009
"We've come a long way," said John Towns, chair of TeraGrid's leadership group, the TeraGrid Forum, describing "The State of the TeraGrid" on Wednesday afternoon. Into its fifth year as a production resource, TeraGrid remains what it set out to be: deep, wide and open.
These three simple words have endured as shorthand for TeraGrid's enabling vision: deep — to provide powerful computational resources that enable research that can't otherwise be accomplished; wide — to grow the community of computational science and make the resources easily accessible; open — to connect with new resources and institutions.
Towns noted that these high-level objectives guide TeraGrid's annual planning, which includes extensive input from user communities, and review from TeraGrid's Science Advisory Board (SAB), which in its most recent report (April 2009) said, among other things: "TG has been instrumental in the discovery of new science that has required the most advanced hardware capabilities as well as the human expertise to utilize those capabilities effectively."
It's been helpful, said Towns, to think of TeraGrid as a "social experiment." An organization that brings together 11 computational research centers across the country as resource providers, and that serves researchers across the diverse spectrum of NSF-supported work, is unique, to say the least, and its management structure has evolved as the organization has established its staying power. The SAB review commented on the effectiveness of TeraGrid management in gluing together these diverse entities.
By quantified measures, TeraGrid has grown significantly over the past year as new NSF-funded resources — notably the Ranger and Kraken systems — have come online. One statistic illustrates this dramatically: during the last quarter of 2008, TeraGrid delivered more compute cycles than during all of 2007. This sharp growth in usage is accompanied by continued growth in the number of new users, and reflects changes that have streamlined the allocations process.
With expansion of computational capacity, data transfer stands out as a challenge, as Towns acknowledged in the question period. New technology plans include further work toward a wide-area global file system, with Lustre-WAN becoming the focus of TeraGrid's effort in this area — to provide a single file system accessible from all TeraGrid resources.
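A minimal sketch of what such a wide-area file system buys users in practice, assuming a shared Lustre-WAN mount visible at every resource provider (the mount path, environment variable, and file names below are hypothetical, and the default path is local so the sketch runs anywhere): a job at one site writes into the shared namespace, and a job at another site reads the same path directly, with no separate staging or transfer step.

    # Hypothetical sketch: with a wide-area shared file system, every TeraGrid
    # resource sees the same namespace, so results need no explicit staging step.
    import hashlib
    import os
    from pathlib import Path

    # Stand-in for the shared Lustre-WAN mount point (hypothetical path);
    # defaults to a local directory so the sketch runs on any machine.
    SHARED = Path(os.environ.get("SHARED_FS", "./shared-fs-demo"))

    def write_result_at_site_a(data: bytes) -> Path:
        """A job at resource provider A writes its output into the shared namespace."""
        SHARED.mkdir(parents=True, exist_ok=True)
        out = SHARED / "run_001.dat"
        out.write_bytes(data)
        return out

    def verify_at_site_b(path: Path) -> str:
        """A job at resource provider B reads the very same path -- no scp or GridFTP staging."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    if __name__ == "__main__":
        result = write_result_at_site_a(b"simulation output bytes")
        print("checksum seen at site B:", verify_at_site_b(result))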
by Michael Schneider, Pittsburgh Supercomputing Center
Posted by Debbie Walsh - June 29, 2009 @ 10:25 AM, Pacific Daylight Time
Large-scale, worldwide scientific initiatives rely on some form of cloud-based system both to coordinate their efforts and to manage computational demand at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
The Xeon Phi coprocessor might be the new kid on the high performance block, but of all the first-rate kickers of Intel's tires, the Texas Advanced Computing Center (TACC) got the first real jab with its new top-ten Stampede system. We talk with the center's Karl Schultz about the challenges of programming for Phi--but more specifically, the optimization...
Although Horst Simon was named Deputy Director of Lawrence Berkeley National Laboratory, he maintains his strong ties to the scientific computing community as an editor of the TOP500 list and as an invited speaker at conferences.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined those latency issues for CFD modeling in the cloud, running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores (a back-of-the-envelope latency model is sketched below).
May 15, 2013
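The following sketch is not from the Bonn study; all figures are illustrative assumptions. It shows why wide-area latency hurts tightly coupled CFD: an iterative solver exchanges halo (boundary) data every step, so each step pays roughly a network round trip plus transfer time, and that penalty is multiplied by the step count.

    # Illustrative model only: per-step cost of halo exchange in a tightly coupled
    # CFD solver, comparing an on-premise interconnect with a wide-area cloud link.
    # All numbers below are assumptions for illustration, not measurements.

    def time_per_step(compute_s, latency_s, halo_bytes, bandwidth_bps):
        """One iteration = local compute + one halo exchange (round trip + transfer)."""
        return compute_s + 2 * latency_s + halo_bytes * 8 / bandwidth_bps

    compute = 5e-3            # 5 ms of local computation per step (assumed)
    halo = 2 * 1024 * 1024    # 2 MB of boundary data exchanged per step (assumed)

    on_premise = time_per_step(compute, 2e-6, halo, 40e9)   # ~2 us latency, 40 Gb/s
    wide_area  = time_per_step(compute, 50e-3, halo, 1e9)   # ~50 ms latency, 1 Gb/s

    steps = 100_000
    print(f"on-premise: {on_premise * steps / 3600:.2f} h for {steps} steps")
    print(f"wide-area:  {wide_area  * steps / 3600:.2f} h for {steps} steps")

Under these assumed numbers, the same run takes roughly 0.15 hours on a local interconnect versus several hours over the wide-area link, which is the latency effect the Bonn researchers set out to quantify.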
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state, the optimization of chemical catalysts, and now the modeling of popping bubbles.
May 10, 2013
The program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 09, 2013
The Japanese government has revealed its plans to best its previous K Computer effort with what it hopes will be the first exascale system...
May 08, 2013
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this white paper by the analyst firm Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses a feature of the SGI DMF software that reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and suited to a wide range of datacenter cooling requirements.