Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
10 words and a link
Shareholders sue to stop Sun purchase, Sun worries over bribery
Canadian HPCS 2009 program and workshops announced
NVIDIA releases CUDA 2.2
Rackable slides into SGI’s name, announces leaders
ANL video discusses Nimbus software for science in the cloud
Penguin in at U of Florida
U of Cambridge announces plans to rent cycles for research
Intel invests in new visual computing center
Fujitsu claims development of world’s fastest processor
NITRD Act passes House
Britain’s new green IT certification
IBM’s new System S aimed at stream computing
NEC Exiting Japan-Backed Supercomputer Project
NEC Corp. announced today that it will withdraw from a Japanese government-backed supercomputer project, part of the cost-cutting measures it is taking during the economic slowdown. The project was started in 2007 in an effort to bring NEC, Hitachi, and Fujitsu together with RIKEN to develop the next generation of supercomputers. NEC decided to pull out in light of a 296.6 billion yen ($3.05 billion) loss for the year to March:
“The company is working to strengthen its profitability by all means including cutting jobs and reviewing projects amid the exacerbating economic environment due to the global downturn,” NEC said in a statement.
“As the next-generation supercomputer project shifts to the phase of manufacturing, a large amount of investment linked to producing the hardware is expected,” it said. “The spending would significantly weigh on the company’s financial health.”
For more info, read the full article here.
The most dangerous answer…
…from a computer program, as a professor at my alma mater used to say, is an answer that looks “about right.” Dan Reed has an interesting post on his blog about accuracy in scientific applications:
The parallel application contains millions of lines of code, combining multiple models of physical, engineering, biological, social and/or economic processes, operating over temporal and spatial scales that span ten orders of magnitude. It was written by tens or even hundreds of graduate students, post-doctoral associates, software developers and yes, even a few professors, over a decade. It involves numerical libraries and functions from diverse research groups and companies, and a single execution requires thousands of hours on tens of thousands of processor cores. In short, it’s a typical example of an extreme scale high-performance computing code.
…
Are you afraid? We all should be. It is time to embrace the scientific process for computational science. We must view the execution of a large, multidisciplinary code as what it is — an experiment, with all the possible error sources attendant with any physical experiment. This includes repeating the experiment (computation) to determine confidence intervals on the answer, conducting perturbation studies to determine the sensitivity of the answer to environmental (hardware and software) conditions, identifying sources of experimental bias and defining the experiment rigorously for independent verification.
Those are the beginning and ending paragraphs. The stuff in the middle is even better; I recommend a read.
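Reed's prescription lends itself to a simple starting point: treat each run as one sample from an experiment and report an interval rather than a single number. Here is a minimal sketch in Python of that idea; the model() function and the run count are hypothetical stand-ins of mine, not anything from Reed's post.

```python
import math
import random

def model(seed):
    """Hypothetical stand-in for one execution of a large simulation.
    The seed captures run-to-run variation (hardware, reduction order,
    random number streams) that a real code would exhibit."""
    rng = random.Random(seed)
    return 42.0 + rng.gauss(0.0, 0.5)  # made-up answer plus noise

# Repeat the "experiment" and report a confidence interval,
# not a single answer that merely looks "about right".
runs = [model(seed) for seed in range(30)]
n = len(runs)
mean = sum(runs) / n
var = sum((x - mean) ** 2 for x in runs) / (n - 1)
half_width = 1.96 * math.sqrt(var / n)  # ~95% CI, assuming normal-ish spread

print(f"answer = {mean:.3f} +/- {half_width:.3f} (95% CI, n = {n})")
```

Perturbation studies are the same loop with deliberately varied inputs or environments; the point is that the machinery is cheap compared to the cost of trusting a single run.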
This also ties in nicely with one of the primary recommendations of the International Assessment of Research and Development in Simulation-Based Engineering and Science (SBE&S), released late last month, which I summarized for HPCwire last week. A representative snippet from that report on this topic:
A report on European computational science (ESF 2007) concludes that “without validation, computational data are not credible, and hence, are useless” …The data and other information the WTEC panel collected in its study suggests that there are a lot of “simulation-meets-experiment” types of projects but no systematic effort to establish the rigor and the requirements on UQ and V&V that the cited reports have suggested are needed.
UK Met tries to cure its black thumb
The UK Met Office (the British government’s weather forecasting agency, founded in 1854) had some bad press in January of this year when the Times Online ran an article lambasting the organization for the carbon footprint of its new IBM supercomputer:
For the Met Office the forecast is considerable embarrassment. It has spent £33m on a new supercomputer to calculate how climate change will affect Britain — only to find the new machine has a giant carbon footprint of its own.
“The new supercomputer, which will become operational later this year, will emit 14,400 tonnes of CO2 a year,” said Dave Britton, the Met Office’s chief press officer. This is equivalent to the CO2 emitted by 2,400 homes – generating an average of six tonnes each a year.
Now the Met Office is trying to get back out in front of that story by making it clear that energy considerations will be explicit decision drivers in its next upgrade:
The Met Office is planning to upgrade its high performance computing systems in the next 18 months and is focusing on how to make those systems more efficient, according to the organisation’s head of IT services.
One of the techniques the Met Office has hit on is using direct current (DC) to power its servers rather than AC, avoiding the large power losses incurred during conversion from AC to DC, according to IT chief Steve Foreman, speaking at the Green IT ’09 conference in London on Thursday.
…
According to Foreman, the organisation is also looking at other ways to improve the efficiency of its high performance computing systems — used for weather modelling — such as increasing the temperature in its data centres.
That last bit is becoming quite popular; you’ll recall that Pete Beckman at ANL’s Leadership Computing Facility is doing the same thing with much success. Still, the UK Met is warning everyone (ahead of time for a change) that even though they are doing what they can, supercomputing still takes a lot of power:
“Our supercomputers use something like 40 to 50 percent of our entire electricity usage in the organisation at the moment – that is about to go up to 80 percent,” he admitted. “It’s going up because in order to provide more accurate weather information we need more computing power. We are getting more calculations per watt, but the demand for calculations far exceeds the rate at which the suppliers are able to reduce power consumption.”
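To see why DC distribution is attractive, a back-of-the-envelope comparison helps. All of the figures in this sketch are illustrative assumptions of mine, not Met Office numbers:

```python
# Back-of-the-envelope comparison of AC vs. DC power distribution.
# Every number here is an illustrative assumption, not a Met Office figure.
it_load_kw = 1000.0        # power actually delivered to the servers
ac_efficiency = 0.88       # assumed AC power supply conversion efficiency
dc_efficiency = 0.95       # assumed efficiency with DC distribution
hours_per_year = 24 * 365

def annual_kwh(load_kw, efficiency):
    """Energy drawn from the grid to deliver load_kw to the machines."""
    return load_kw / efficiency * hours_per_year

ac_kwh = annual_kwh(it_load_kw, ac_efficiency)
dc_kwh = annual_kwh(it_load_kw, dc_efficiency)
saved = ac_kwh - dc_kwh

print(f"AC distribution: {ac_kwh:,.0f} kWh/year")
print(f"DC distribution: {dc_kwh:,.0f} kWh/year")
print(f"Savings:         {saved:,.0f} kWh/year ({saved / ac_kwh:.1%})")
```

Under those assumptions the saving works out to roughly 7 percent of the facility's draw, which at supercomputer scale is real money and real carbon.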
—–
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].