August 25, 2010
Aug. 25 -- With twin awards from the National Science Foundation (NSF) totaling $3.4 million, the University of Tennessee-managed National Institute for Computational Sciences (NICS) will add 300 teraflops to the TeraGrid's total computational capability.
Researchers will also have access to more than 200 million additional service units, or CPU hours, per year, bringing the total available from NICS to over 800 million and benefiting the organization's entire user community.
The first part of the award will increase the size of Kraken, the first academic petaflop computer and currently the world's fourth fastest machine, by 12 cabinets, adding 144 teraflops of computing power.
The Cray XT5 will now total 100 cabinets and provide 1.17 petaflops of computing capability and 147 terabytes of memory. While Kraken is an ideal resource for running some of the world's most computationally demanding simulations, the new cabinets will also assist the myriad smaller jobs continually running on NICS's flagship system.
"We are extremely pleased to be able to put more continually available resources at the disposal of researchers with smaller codes, while still supporting the very largest applications," said NICS Director Phil Andrews. "The importance of a research activity cannot be defined by the size of the code involved, and we want to give all NICS users the best possible service."
Although Kraken is the only resource in the NSF's computing portfolio capable of running simulations at its full potential of 8,256 nodes, it is also a massive capacity resource.
Many of the most computationally demanding codes running on Kraken use the "sweet spot" of 8,192 nodes, the largest power of two that the 8,256-node machine can accommodate. While a code of this size is running, albeit for only part of each week, at most 64 nodes remain for other users. The extension of Kraken to 9,408 nodes will increase the nodes left over during such runs 19-fold, from 64 to 1,216, letting smaller jobs run concurrently. This will greatly improve availability of the system for smaller "capacity" jobs while still allowing the extremely large "capability" jobs access to the NSF's most powerful supercomputing system.
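The node arithmetic above can be verified with a short script; the figures are the node counts quoted in the article:

```python
# Kraken node counts quoted in the article.
NODES_BEFORE = 8_256   # original machine size
NODES_AFTER = 9_408    # size after the 12-cabinet expansion
SWEET_SPOT = 8_192     # largest power of two that fits in 8,256 nodes

# 8,192 is 2**13, the "sweet spot" for large power-of-two jobs.
assert SWEET_SPOT == 2 ** 13

# Nodes left for other users while a sweet-spot job occupies the machine.
leftover_before = NODES_BEFORE - SWEET_SPOT
leftover_after = NODES_AFTER - SWEET_SPOT

print(leftover_before)                     # 64
print(leftover_after)                      # 1216
print(leftover_after // leftover_before)   # 19-fold increase
```

Running it confirms the factor-of-19 claim: 64 leftover nodes before the expansion versus 1,216 after.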
The second part of the award will fund the operation of Athena, a 166-teraflop Cray XT4 that is currently ranked as the TeraGrid's third largest computational resource. Athena features 18,048 cores and 18 terabytes of memory and is an extremely reliable system, most recently used as a dedicated platform for climate, weather and quantum chromodynamics research. Athena will be available through the TeraGrid allocations process beginning Oct. 1, 2010, and will be allocated in conjunction with Kraken. This will allow NICS to maximize the usefulness of both of these leading resources, each of which is running at over 90 percent utilization, by directing researchers to the most appropriate machine.
"The availability of large-scale computing resources has quickly evolved our field of biomolecular simulation and computational chemistry and has enabled a move from validation and assessment of the methods into the realm of prediction and production in applications ranging from the design of new biomaterials to computer-aided drug design," said NICS user Tom Cheatham of the University of Utah. "The addition of time comes at a critical juncture as the TeraGrid and other machines available in the US for research are over-subscribed, inhibiting science across a wide range of disciplines."
Colin Morningstar of Carnegie Mellon echoed Cheatham's enthusiasm: "The additional allocation time will definitely accelerate our lattice QCD research and allow us to study quarks and gluons in much larger volumes and using lighter quark masses. We are very excited about the new possibilities that this creates."
The National Institute for Computational Sciences (NICS) is a joint effort of the University of Tennessee and Oak Ridge National Laboratory. NICS was founded in 2007 and is supported by the National Science Foundation and the State of Tennessee. NICS is a resource provider in the National Science Foundation's TeraGrid program and is located at Oak Ridge National Laboratory, home to the world's most powerful computing complex.
The TeraGrid, sponsored by the National Science Foundation Office of Cyberinfrastructure, is a partnership of people, resources and services that enables discovery in U.S. science and engineering. Through coordinated policy, grid software, and high-performance network connections, the TeraGrid integrates a distributed set of high-capability computational, data-management and visualization resources to make research more productive. With Science Gateway collaborations and education programs, the TeraGrid also connects and broadens scientific communities.
Source: National Institute for Computational Sciences