June 11, 2010
ISC Celebrates 25 Years
Congratulations to Hans Meuer and the "old timers" -- Hans's term for the six colleagues who were with him at the very first ISC event. Presented with their designations from 25 years ago, they are:
At 74 years old (June 7th was his birthday -- Happy Birthday, Hans!), this jovial professor runs ISC as if it were still a quaint gathering of his European HPC colleagues. It should be noted that this open atmosphere of camaraderie and friendship lends itself to great discussions, networking and collaboration. Now celebrating its 25th year, the event has become "the" HPC event in Europe. Congratulations on another successful year, Hans -- wishing you many more to come!
Hans Meuer, General Chair ISC; Wolfgang Gentzsch, ISC Cloud General Chair, Contributing Editor of HPC in the Cloud; Tom Tabor, Publisher HPC in the Cloud & HPCwire
Intel – the biggest announcement at the event
Editor Michael Feldman covers Intel's surprise announcement of its plans to build an HPC coprocessor. Many attendees had not been briefed and were caught off guard. An Intel source informed me that the board approved the announcement just days before ISC, giving the company's HPC team little opportunity to get the word out to its hardware partners.
The TOP500 is not all that exciting this year, but it continues to get great global coverage. Everyone loves a horse race, and supercomputing is no exception. Coverage of the TOP500 came in from the BBC, MSNBC, and CNET, to name just a few.
Here are a couple of comments from two ardent followers of the list:
Michael Feldman, editor of HPCwire, said, "A Chinese supercomputer called Nebulae, powered by the latest Fermi GPUs, grabbed the number two spot on the TOP500… It's also good news for China. That country is developing its supercomputing resources at a rapid pace now, especially at the top end of the spectrum. This latest list puts 24 Chinese systems in the TOP500 -- tied with Germany, and trailing only the US, UK, and France. And from an aggregate performance standpoint, China is second only to the US.
Besides the China-GPU excitement, the rest of the TOP500 news was rather humdrum."
Chris Willard, Chief Research Officer for Intersect360 Research, wrote: "…outside of the HPC technical community, the computer science part of the TOP500 is largely ignored, reducing one of the most complex technologies and markets in the world to a few dozen statistics. This is like sending your child to medical school based on which institution has the highest-rated basketball team, or like assuming that one can understand chemistry by examining the bottom rows of the periodic table."
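For readers curious what the "few dozen statistics" boil down to, the country-by-country comparisons quoted above are simple aggregations over the list. Here is a minimal, hypothetical sketch of that kind of tally -- the records and field names are illustrative, not the actual TOP500 data format:

```python
# Hypothetical sketch: tallying TOP500-style entries by country.
# System records here are illustrative toy data, not the real June 2010 list.
from collections import defaultdict

systems = [
    {"name": "Jaguar", "country": "US", "rmax_tflops": 1759.0},
    {"name": "Nebulae", "country": "China", "rmax_tflops": 1271.0},
    {"name": "Roadrunner", "country": "US", "rmax_tflops": 1042.0},
]

counts = defaultdict(int)      # systems per country
perf = defaultdict(float)      # aggregate Rmax per country

for s in systems:
    counts[s["country"]] += 1
    perf[s["country"]] += s["rmax_tflops"]

# Rank countries by aggregate performance, highest first
ranking = sorted(perf.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # US leads China on this toy data
```

Swap in the full 500-entry list and the same few lines reproduce both the system counts (e.g., 24 Chinese entries) and the aggregate-performance standings the quotes refer to.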
Microsoft – Modeling the World
Imagine the day when you can swipe your credit card to gain access to HPC in the cloud and pay for exactly what you need. This is where Microsoft is heading. The belief is that there are millions of potential HPC users in the "missing middle" who don't necessarily need petascale technologies but do need HPC to solve their problems. Isn't this what we've been working toward all these years? This is what we envisioned at the start of the supercomputing era in 1985, and we're now ever so close!
IDC Market Update
IDC released its May 2010 HPC market update (slides of the report are available here in PowerPoint). Overall, the HPC market is projected to grow for several reasons, including the following:
New Modeling and Simulation Leadership Panel
Intersect360 Research announced the formation of its Modeling and Simulation Leadership Panel, and is inviting organizations to join this worldwide panel of institutions using computational modeling, simulation and analytics to advance their cutting-edge positions in engineering development and scientific research.
Members will be involved in steering the direction of the HPC industry as it applies to computational modeling, simulation and analysis.
The Russians are coming...
Have you heard of T-Platforms? No surprise if you haven't; they're yet another AMD/Intel-based hardware systems vendor. They're the "chosen one" of the Russian government. Essentially, whatever they sell, the Russian government will buy, and if they don't have something you're looking for, the Russian government will pay them to build it -- an enviable position for any HPC vendor. Having said that, this isn't unusual from a nationalistic HPC perspective; the US did much the same in the formative years of its HPC era. In any case, T-Platforms has opened an office in Europe and plans "significant growth in this market." I'm sure a few vendors would like to welcome them to the very competitive EMEA marketplace -- IBM, Dell, HP, Bull, SGI, Cray, Fujitsu and NEC, just to name a few. We wish them the best of luck.
Race to Exascale
We caught up with Jack Dongarra for a very insightful interview in which he outlined the challenges of achieving exascale computing and the global efforts toward that end. It should be no surprise that software remains the major bottleneck.
In a separate piece, Thomas Sterling and Chirag Dekate of Louisiana State University covered the major initiatives launched over the last year to engage the talents of the international community, including experts in hardware, software, algorithms, and domain science. They named IESP, DOE X-Stack, and DARPA UHPC, plus many other smaller efforts.
We will be significantly expanding our coverage of this exciting new stage of HPC, so keep an eye out for much more on the race to exascale.
Posted by Tom Tabor - June 11, 2010 @ 11:18 AM, Pacific Daylight Time
Tom is the publisher of HPC in the Cloud. He has over 30 years of experience in business-to-business publishing, with the last 22 years focused primarily on High Performance Computing (HPC) technologies.