Here’s a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
Sun and Fujitsu release new SPARC64 chip;
Details on Blue Waters;
Another big power wall, but don’t let us down;
DISA builds for a cloud future;
Sun’s terabyte tape;
AMD to take $900M charge in Q2;
Sun guides early on Q2, expects profit;
Mellanox establishes HPC council to bring HPC to the masses;
IMSL Fortran Numerical Library on Windows CCS 2003;
NVIDIA conference for entrepreneurs and VCs interested in visual computing;
>>UK invests nearly $800M in science, HPC gets $230M chunk
A press release from the London-based Department for Innovation, Universities & Skills this week talks about a big investment that the UK’s Large Facilities Capital Fund (LFCF) is making in science and computing:
Almost £400 million is being made available through the Government’s Large Facilities Capital Fund to provide support for the development of nine multidisciplinary research projects focusing on a variety of areas, including long-term studies of economic, health and social development; the construction of new neutron beams to test the physical behaviours of structures such as turbine blades or the design of new drugs; and the development of modelling software to simulate future climate scenarios and cell interactions.
Details of the computing pieces? While the money isn’t actually allocated yet, it is “on the list,” which is a pretty good start. £50M ($100M US) is marked for a new computational sciences center at the Daresbury Science and Innovation Campus:
The Hartree Centre – Science and Technology Facilities Council (STFC). A new world-leading computational sciences centre for the UK. It is intended [to] model complex systems and processes such as climate variability and human biological systems. The centre will build on existing scientific expertise at Daresbury.
And £65M ($130M US) goes toward a next-generation supercomputer that will likely serve as (part of) Britain’s contribution to PRACE’s network of petascale centers:
High End Computing – Engineering and Physical Sciences Research Council. The project involves the procurement of the next generation supercomputer for the UK. This project is likely to form the UK’s initial contribution to the proposed European network of 3 to 5 leading edge computational systems, which will provide European Scientists with a suite of complementary computing capabilities that rival those in the USA and Japan.
As Andy pointed out in the comments on this story at insideHPC, it’s worth realizing that while HECToR had a £52M contribution from the LFCF, the overall project value was in excess of £100M. Thus it is reasonable to assume that the £65M quoted for the HECToR successor will be supplemented by additional funding from EPSRC and other research councils.
>>DoD funds Cray, PNNL, partners for data-intensive HPC project
PNNL announced this week that it is leading a multi-institutional team that has been awarded $4M to develop software for the Cray XMT for data-intensive computing. (For some additional background on data intensive computing, check out this article I wrote at HPCwire.)
The difference between the new breed and traditional supercomputers is how they access data, a difference that significantly increases computing power. But old software won’t run on the new hardware any more than a PC program will run on a Mac. So, the Department of Defense provided the funding this month to seed the Center for Adaptive Supercomputing Software, a joint project between the Department of Energy’s Pacific Northwest National Laboratory and Cray, Inc., in Seattle.
Other researchers in the software collaboration hail from Sandia National Laboratories, Georgia Institute of Technology, Washington State University and the University of Delaware.
PNNL’s article is interesting, with some additional results and background on applications:
In previously published work, PNNL computational scientist Jarek Nieplocha used a predecessor of the Cray XMT to run typical software programs that help operators keep the power grid running smoothly. Adapted to the advanced hardware, these programs ran 10 times faster on the multithreaded machine. “That was the best speed ever reported. We’re getting closer to being able to track the grid in real time,” said Nieplocha.
>>TACC’s Ranger Gets an Upgrade
TACC has announced that it has just completed the first upgrade to its fabled Ranger supercomputer. All 15,744 quad-core AMD Opteron processors have been upgraded from the original 2.0 GHz parts to 2.3 GHz. The additional 300 MHz per core boosts overall peak performance by 75.4 Tflops, bringing the massive Ranger cluster to a rated peak of 579.4 Tflops.
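The quoted peaks are easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming the usual 4 double-precision flops per core per cycle for quad-core Opterons (the flops-per-cycle figure is my assumption, not from the article):

```python
# Rough check of Ranger's quoted peak numbers.
# Core count and clock speeds are from the article; the 4 flops/cycle/core
# figure is an assumption typical of quad-core AMD Opteron ("Barcelona").

PROCESSORS = 15_744          # quad-core AMD Opteron sockets
CORES = PROCESSORS * 4       # 62,976 cores in total
FLOPS_PER_CYCLE = 4          # assumed: per-core double-precision flops/cycle

def peak_tflops(clock_ghz):
    """Theoretical peak performance in Tflops at a given core clock."""
    return CORES * FLOPS_PER_CYCLE * clock_ghz * 1e9 / 1e12

old_peak = peak_tflops(2.0)  # before the upgrade
new_peak = peak_tflops(2.3)  # after the upgrade

print(f"old: {old_peak:.1f} Tflops")   # ~503.8 Tflops
print(f"new: {new_peak:.1f} Tflops")   # ~579.4 Tflops, matching the article
print(f"gain: {new_peak - old_peak:.1f} Tflops")
```

By this arithmetic the 2.3 GHz peak works out to 579.4 Tflops, matching the announced figure, and the clock bump alone accounts for a gain of roughly 75.6 Tflops, in line with the 75.4 quoted.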
“Ranger was the first major supercomputing system to use AMD’s native quad-core technology, proving its performance on large-scale science,” said Jay Boisseau, TACC director. “We’re very excited to work with AMD and Sun to upgrade Ranger’s performance with even faster AMD processors, providing the national open science community with unprecedented computing power.”
The latest bump to Ranger's clock might be just enough to push it to #3 on the Top500, past the BlueGene/P at Argonne. Of course, you'll have to wait until November to find out. The upgrade also eliminates the pesky Barcelona TLB erratum that was present in all of the original 62,976 cores.