Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
10 words and a link
New Bull supercomputer design aims big, green
Appro Xtreme-X1 sports dual dedicated on-board QDR IB links
Chutes packed and ready to be deployed at Sun
Supercomputer used to find largest measured black hole
Matrox intros GPU platform aimed at industrial imaging
Mitrionics announces new SDK for its HPC-oriented FPGA solution
Allinea developing CPU/GPU hybrid debugging tools, aims at 32k cores
Microsoft announces Extreme Computing Group, headed by VP Dan Reed
Computation supports National Earthquake Hazards Reduction Program, House finds value
Illinois and France partner on petascale research
UNT builds $2M HPC center
$50M IBM super at U Toronto is Canada’s fastest
NVIDIA’s OpenCL drivers certified
SC09 conference registration open
Sun sponsors student party at ISC’09
HPC Advisory Council announces programs, workshops
A view from inside the team that built (and debugged) the Wolfram|Alpha HPC infrastructure
We wrote in mid-May about the HPC resources being used to power Wolfram’s new Alpha computational portal:
When Wolfram|Alpha launches, it will be one of the most computationally intensive websites on the internet….What computing power have we gathered in these facilities for launch day? Two supercomputers, just about 10,000 processor cores, hundreds of terabytes of disks, a heck of a lot of bandwidth, and what seems like enough air conditioning for the Sahara to host a ski resort.
But what about getting it all to work before launch day? That story is chronicled by one of the guys from the inside team in this post at the Wolfram|Alpha blog. It’s interesting because it shows that these things rarely go smoothly:
Given the broader audience the product was becoming viable for and given the public response that we had seen so far, what should we forecast as peak launch demand? How about being able to handle a peak of 2000 queries per second, ten times the earlier plan? Since we hadn’t even talked to a supercomputer vendor yet with about two months to go until launch, we had moved from prudent to very aggressive on both time frame and target.
Wolfram worked with R Systems and Dell to build out the two supers that serve as the primary engines behind Alpha.
We were then just days before launch; that put 140 nodes at our disposal, and final load testing could proceed. One cluster of the big Dell system handled 130 qps—check. Two clusters got 260 qps. We were cooking. Three clusters, 210. Uh oh. Four clusters, 120. !?@#%^. Maybe it was just a glitch. We tried it again, but round two didn’t fare any better. It was time for an emergency meeting. Everyone was on the case (Jeff and his systems engineering guys, plus Chris, Jamie, Grant, Mike, Oyvind, and many other folks), working non-stop to figure out the bottleneck. Something must have been thrashing, but what was the problem? The test rig? It checked out. The edge switch? That checked out, too. Ditto on the other end of the line. Core switch? Also fine. Was logging slowing us down? Nope. Were any of the databases saturated? Looked okay. The test log implied packet loss, as did the web server logs.
I’m skipping all the best parts — lots of late nights chasing the demo demons — to get to the climax of the story:
The Wolfram|Alpha logging data was being transmitted across the auxiliary network to the main office for aggregation before being sent to the monitoring systems to make those nice visualizations you see in the video. Chris from the systems engineering team ran a ping test on the auxiliary network during a load test. Latency skyrocketed. Bingo! Not enough allowed connections, so we were saturating the proxy.
After raising the number of allowed connections to something ludicrous, we tested again. No dice. Joshua and Mike continued monitoring all of the auxiliary traffic, and in this test the logging system was saturated. It wasn’t doing that before. There weren’t enough connections to the logging database. After Joshua and Mike implemented a fix, we did one more test. One cluster: 140 qps. Two clusters: 280 qps. Three clusters: 400 qps. Then we decided to go for broke. Six clusters: 750 qps. Then for R Smarr: 160 qps, 300 qps, 500 qps, 900 qps. Eureka!
It is an engaging story, especially if you’re the type of person who’s had to set up demos at a tradeshow where nothing worked until two hours before the show opened. Fun read.
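Incidentally, the diagnostic Chris ran generalizes nicely: drive load at the front door while sampling latency on the side channels. Below is a minimal sketch of that idea in Python. To be clear, this is not Wolfram’s test rig — the hostnames, URL, worker count, and durations are all invented for illustration, and a real load test would use a proper tool rather than this toy harness.

    # Hypothetical sketch: generate load against a service while sampling
    # round-trip latency on a *different* (auxiliary) path. If latency on
    # the side channel spikes as load rises, something shared on that path
    # (a proxy, a logging pipeline) is the real bottleneck.
    import time
    import threading
    import subprocess
    import urllib.request

    SERVICE_URL = "http://app.example.com/query?q=2%2B2"  # system under test (invented)
    AUX_HOST = "logs.example.com"                         # auxiliary-network host (invented)
    DURATION_S = 30
    WORKERS = 50

    done = threading.Event()
    request_count = 0
    count_lock = threading.Lock()

    def load_worker():
        # Issue requests in a tight loop; count only completed responses.
        global request_count
        while not done.is_set():
            try:
                with urllib.request.urlopen(SERVICE_URL, timeout=5) as resp:
                    resp.read()
            except Exception:
                continue  # failed requests do not count toward qps
            with count_lock:
                request_count += 1

    def ping_aux():
        # Sample round-trip latency to the auxiliary network once per second.
        while not done.is_set():
            out = subprocess.run(["ping", "-c", "1", AUX_HOST],
                                 capture_output=True, text=True)
            for token in out.stdout.split():  # crude parse of "time=12.3 ms"
                if token.startswith("time="):
                    print(f"aux latency during load: {token[5:]} ms")
            time.sleep(1)

    threads = [threading.Thread(target=load_worker) for _ in range(WORKERS)]
    threads.append(threading.Thread(target=ping_aux))
    for t in threads:
        t.start()
    time.sleep(DURATION_S)
    done.set()
    for t in threads:
        t.join()
    print(f"sustained ~{request_count / DURATION_S:.0f} qps over {DURATION_S}s")

The point is the correlation, not the tooling: when qps falls off as you add clusters and latency on a supposedly unrelated path climbs at the same time, the shared resource between them (in Wolfram’s case, a connection-starved proxy and then the logging database) is the place to look.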
Controversy continues in NM over big supercomputer
Last week a Silver City, NM, newspaper published a commentary by Paul Gessing, the president of a NM political nonprofit called the Rio Grande Foundation, which agitates for “limited government” among other goals. The title of the commentary clearly indicates the author’s point of view: “Supercomputer a waste of taxpayer money.”
The supercomputer in question is Encanto, of course, the big SGI machine built by the state of New Mexico in 2007; in that year the machine was number 3 on the TOP500. We predicted here back in February of 2008 that selling this project within the state as one that would pay for itself by selling cycles was a recipe for trouble. Evidently we were right, as the state hasn’t brought in much revenue relative to the investment:
According to the new report on the supercomputer which was published [by the New Mexico Legislative Finance Committee] in May, taxpayers have spent $13.8 million on the project to date. Gov. Richardson originally asked the state Legislature for $42 million for the project, but according to the LFC, documents provided by the governor’s science advisor indicate that $115.5 million will be required over a seven-year period for recurring and nonrecurring costs.
According to the LFC, the supercomputer known as Encanto has taken in only about $300,000 in cash.
Low (about two cents of revenue for every dollar spent so far), but not surprising. There are all kinds of fantastic reasons for a state or government institution to invest in substantial supercomputing resources, including economic development (several of the big car manufacturers that located in Mississippi did so in part because of the advanced engineering computation being done at MSU and other state universities). Directly generating revenue is not one of them. All they had to do was pick up the phone and call pretty much anyone in the HPC community to find this out. Too bad they didn’t check. Since they framed the initiative as a money maker, it will be judged a failure no matter how much good science and engineering is done with it (short of a cure for cancer, I guess).
NYT reports unconfirmed news that Sun has terminated Rock chip efforts
As a harbinger of Sun’s hardware future as part of the Oracle empire, this does not bode well. A post on Joe Landman’s blog alerted us to a blog post by Ashlee Vance at the NYT on Sun’s decision to terminate development of the five-year-old Rock chip project:
Sun Microsystems may have dropped a bit of weight by the time Oracle officially acquires the company. According to two people briefed on Sun’s plans, the company has canceled its Rock chip project, putting an end to one of its biggest revitalization bets.
According to Vance, the company had no official comment, and the “two people” spoke on condition of anonymity.
This marks the second high-end chip in a row that Sun has canceled before its release. These types of products cost billions of dollars to produce, and Sun now has about a 10-year track record of investing in game-changing chips that failed to materialize.
Sun has been relying on chips from Fujitsu for its larger servers while it waited for the Rock development to be finished. Now it is likely to just continue using Fujitsu chips, which should lower research and development costs. That’s probably good news for Oracle, which is in the process of acquiring Sun.
-----
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].