October 23, 2008
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
insideHPC Exclusive: the complete 411 on SC08 Cluster Challenge Teams;
OpenSolaris on IBM gear;
Sun announces first storage blade;
Sun announces new datacenter design, strategy, and build services;
Plan for the WRF winter tutorials;
Cray and KMA invest further in Earth System Research Center;
Intel ponies up $120M to bring youth into science and math;
MATLAB for grids and the cloud, compiler for standalone apps;
TACC and UnivaUD partnering on next generation HPC software stack;
>>Getting to exascale with volunteers and GPUs
At this scale, clusters and supercomputers run into problems with power consumption and heat dissipation, so exascale computing via these approaches is probably many years away. However, there may be a much faster and cheaper path to exascale: a combination of volunteer computing and graphics processing units (GPUs).
Could this scenario be realized in the near term, say in 2010? In my opinion, it's near-certain that GPUs will reach 1 TeraFLOPS by then, and a large percentage of PCs will be available to run BOINC (although the advent of 'green computing' will decrease availability somewhat). The hard part will be getting 4 million GPU-equipped volunteered PCs; there are currently about 1 million PCs participating, not all of them GPU-equipped, so an order-of-magnitude increase is needed.
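The back-of-envelope math behind the 4 million figure is worth making explicit. A quick sketch, using the article's 1 TeraFLOPS-per-GPU estimate; the 25% effective availability factor (the fraction of peak that volunteered machines actually deliver, given duty cycles and 'green computing' idle-out) is my assumption, not a number from the article:

```python
# How many volunteered, GPU-equipped PCs does one ExaFLOPS take?
# 1 TFLOPS/GPU is the article's estimate; the 25% effective
# availability is an assumed illustration, not a stated figure.
TARGET_FLOPS = 1e18    # 1 ExaFLOPS
GPU_FLOPS = 1e12       # 1 TeraFLOPS per GPU
AVAILABILITY = 0.25    # fraction of peak actually delivered

pcs_needed = TARGET_FLOPS / (GPU_FLOPS * AVAILABILITY)
print(f"{pcs_needed:,.0f} PCs")
```

Under that availability assumption the answer comes out to exactly the 4 million PCs cited above; a more optimistic duty cycle shrinks the requirement proportionally.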
The generally loose coupling between nodes in volunteer grids and the social aspect of such entities make them inappropriate for many classes of problems. But in those cases where they are appropriate, I agree with David that they will probably offer the fastest path to exascale computing. Like Blanche Du Bois, you too can "rely on the kindness of strangers." Hopefully it will work out better for you than it did for her (yikes).
>>InsideTrack: Rocks on Solaris
Earlier this week, we posted a preview of the Sun booth at SC08. In it, we speculated on the appearance of the Rocks cluster distribution in their booth, running Sun's Solaris operating system no less. For once, our speculation was correct! Thanks go out to Mason Katz, Greg Bruno and Anoop Rajendra (Solaris port project lead for the Rocks team) for confirming the news.
Our display at the Sun booth will consist of:
1. A small cluster running an alpha version of Rocks on Solaris.
2. Fully automated provisioning of Solaris compute nodes, and Thumper/Thor (NAS) appliances as part of the cluster from a Linux frontend.
3. Rolls support.
4. MPI support using the Sun HPC Cluster Tools product.
5. Sun Grid engine batch system support, and demonstration of running MPI jobs using SGE.
He goes on to comment:
The most important take-away from this, (we hope) will be that -- Rocks brings the same ease of installation and use to deploying Solaris on a cluster, as it has brought to Linux for the past few years.
Very cool stuff! We'd heard rumors about this for quite some time, but had no concrete news to report. Now it's confirmed.
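For readers who haven't driven the pieces named above, item 5 boils down to a plain SGE job script. A minimal sketch of submitting an MPI job through Sun Grid Engine; the parallel environment name (`orte`), slot count, and binary name here are illustrative assumptions, not details from the Rocks announcement:

```shell
#!/bin/bash
# Minimal SGE job script for an MPI run (hypothetical names).
#$ -N mpi_demo        # job name
#$ -pe orte 8         # parallel environment + slot count (site-specific)
#$ -cwd               # run from the submission directory

# SGE sets NSLOTS to the number of slots actually granted.
mpirun -np $NSLOTS ./mpi_demo
```

Submitted with `qsub mpi_demo.sh`; SGE allocates the slots across the cluster and `mpirun` (here, the one from Sun HPC ClusterTools) launches the ranks on them.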
>>The WETA Digital fire that might have been
A reader left a comment this week on a post from back in July about WETA Digital's growing HPC capacity. You know WETA already: they are the special effects rock stars behind Lord of the Rings, Fantastic Four: Rise of the Silver Surfer, X-Men: The Last Stand, and others. HP announced during ISC in Dresden that the company had installed 100 TFLOPS of rendering capability in four clusters, ranked 219-222 on the Top500 list.
Anyway, the commenter, Noir, reported a fire on that story. I won't reproduce the comment in its entirety here, since it includes the entire Computerworld story, but here are excerpts:
Not many people are talking about the Fire that happened out at WETA. http://computerworld.co.nz/news.nsf/news/D378E7E589536D7DCC2574CC000AAD18
Computerworld had been told an HP blade server caught fire, bringing out the fire brigade....Partly true, according to a PR spokeswoman at Weta. She says a "sensitivity" caused a hot spot in an old computer room, about to be decommissioned, which triggered an automatic call-out by the fire brigade. Asked to define "sensitivity", she couldn't.
To say that not many people are talking about it is an understatement. In fact, a couple of reasonable Google searches turned up zero hits other than the cited Computerworld story. And while the spokeswoman's explanation is totally bogus, it seems plausible that things happened as described and that this really isn't much of a story. But if you're from that part of the globe and know anything, drop us a line.
>>Sun guides lower for Q1 2009, anticipates loss
This week Sun issued a press release giving the market a heads up that its Q1 results would be lower than anticipated:
Sun expects to report revenues for the first quarter of fiscal 2009 in the range of $2.950 to $3.050 billion, as compared with $3.219 billion for the first quarter of fiscal 2008....Sun anticipates reporting GAAP net loss per share, before the impact of the potential goodwill impairment charge discussed below, for the first quarter of fiscal 2009 in the range of $(0.25) to $(0.35).
To put this in perspective, Sun posted a profit for fiscal 2008, with profits in three of four quarters (Q3 was the outlier).
"Sun and its customers are seeing the impact of a slowing economy. We believe we are positioned to offer the kinds of products that can radically help customers reduce expenditures for their infrastructure from Open Storage to Solaris-based Chip Multi-Threading (CMT) systems to offering the most eco-efficient systems in the market," said Jonathan Schwartz, CEO of Sun Microsystems.
In other words, "We have no reason to think we'll make money when everything else is tanking, but that's how we hope it works out." Sun will report final Q1 results Thursday next week.