May 28, 2010
If the Icelandic volcano gods permit, this year's International Supercomputing Conference in Hamburg, Germany, will be the best-attended and the most exhibitor-laden show in the event's 25-year history. The ISC organizers expect upwards of 2,000 attendees and around 150 exhibitors, both of which would be records.
The conference starts Sunday, May 30, and runs through Thursday, June 3, giving the HPC faithful five days of unrelenting supercomputing revelry. I'll be there from start to finish, endeavoring to bring you the highlights of this year's event with our special live coverage from HPCwire. And there should be plenty to cover.
As always, the big problem for us in the HPC journalistic biz at these big supercomputing shows is finding the gold nuggets amidst a lot of shiny-looking news, sessions, and exhibits. In this article, I'll attempt to point out what I consider the can't-miss happenings at this year's conference.
One of the traditional events at ISC is the reshuffling of the TOP500 list, which ranks the top supercomputing systems in the world by Linpack performance. As far as what to expect this time around, I covered most of this in Wednesday's blog entry. In that post, I surmised that the top of the list would see only one or two new petaflop entries, but since then, I've found out about two other possible candidates.
One is the new 1.25 (peak) petaflop Tera 100 system from Bull, which was installed for the French Atomic Energy Authority (the CEA). According to the Thursday press release, the machine was just powered up on 26 May, so presumably they missed the deadline earlier this month to turn in the Linpack benchmark results for the June list (although perhaps they did some lab benchmarking before deployment).
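For the curious, a "peak" petaflop figure like Tera 100's is a theoretical number: cores times floating-point operations per clock cycle times clock rate. A minimal sketch of the arithmetic, using illustrative figures (not confirmed Tera 100 specifications) chosen to land near the 1.25 petaflop mark:

```python
# Sketch of how a machine's theoretical peak (Rpeak) is derived.
# The figures below are illustrative, not the actual Tera 100 configuration.

def peak_teraflops(sockets, cores_per_socket, flops_per_cycle, clock_ghz):
    """Theoretical peak in teraflops: cores x flops/cycle x clock (GHz)."""
    return sockets * cores_per_socket * flops_per_cycle * clock_ghz / 1000.0

# e.g. 17,480 eight-core sockets at 2.26 GHz, 4 double-precision flops/cycle
print(peak_teraflops(17480, 8, 4, 2.26))  # roughly 1,264 TF, i.e. ~1.26 PF
```

The Linpack (Rmax) number that determines TOP500 placement is always some fraction of this peak, which is why a 1.25 petaflop (peak) system isn't guaranteed a petaflop Linpack run.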
The second system won't be officially announced until the week of ISC, so I can't really say anything about it yet, except that it too is a petaflopper and it's a brand new machine. If the owners got their Linpack results in on time, it will almost certainly be a top 5 system.
Turning to the conference proper, I'd like to point to a couple of keynotes that I think will be of interest to everyone. The first is Kirk Skaugen's opening day keynote on Monday. Skaugen is vice president of the Intel Architecture Group and general manager of the company's Data Center Group. He's supposed to talk about scale-up and scale-out technology as it applies to HPC, but according to a recent Intel blog post, Skaugen will also talk about how the company is going to steer its Larrabee processor technology into the high performance computing realm. In December 2009, Intel revealed it had ditched Larrabee for the discrete high-end graphics market but left the door open to using the manycore technology for "throughput computing," so this refocus on HPC is not too big of a surprise.
The second keynote that should not be missed is Thomas Sterling's look back on the year in high performance computing. Something of an ISC tradition, Sterling always manages to make his year-in-review talk entertaining and informative.
On Tuesday and Wednesday are the two Hot Seat sessions, where execs from some of the big HPC vendors are scrutinized by a panel of "inquisitors." A partial list of companies participating in the event includes Bull, NEC, IBM, Cray, Microsoft, HP, and Fujitsu. A surprise entry is Oracle, represented by Sun Microsystems alum Marc Hamilton, who now holds the title of Vice President, HPC Sales Support. If I were an inquisitor, I think my first question would be: "What are you doing here and what have you done with my Sun HPC servers?" I'm guessing someone will ask that question, albeit more tactfully.
HPC analysts will be working the conference pretty hard this year. John Barr from the 451 Group will be presenting HPC market trends and forecasts as part of the HPC Advisory Council European Workshop on Sunday afternoon. On Monday, IDC will host its traditional breakfast analyst briefing, where the analysts will provide their own take on 2010 trends and deliver some predictions for the year ahead. And finally, Addison Snell, who heads InterSect360 Research, will join yours truly to provide some real-time analysis and commentary at the conference with our ISC podcast series on Tuesday and Thursday.
As usual, all of the major HPC vendors will be exhibiting at ISC this year, including a gaggle of smaller Europe-based HPC companies that don't usually make it to the larger US-based supercomputing conference in November. As I mentioned above, Oracle will be attending this year, marking the company's first appearance at an HPC event. Up until now the company has said precious little about its HPC aspirations, so it will be interesting to see how it's positioning itself in this market after the Sun merger.
NVIDIA will also be attending this year (as a co-exhibitor with Microsoft), representing the first time the GPU maker has made the trek to ISC. With GPU computing storming into the HPC landscape this year, it's little wonder that NVIDIA wants to get in on the fun. I counted 11 GPU computing presentations at the conference, and I'm guessing we'll see some buzz about the new top-end supers equipped with (or soon to be equipped with) the latest Fermi hardware.
There is a boatload of worthwhile presentations at the show, too numerous to list in full. But I'll mention a few that look particularly interesting, and you can follow the links to get the details.
Of course, there are social events in the evening, but if you happen to catch me at any of them, just remind me I'm supposed to be working. Otherwise, I hope to see you there.