June 25, 2009
One of the more heavily attended presentations at ISC'09 was the "Cloud Computing & HPC: Synergy or Competition?" session, which took place on Wednesday morning. The interest from ISC attendees seems to reflect the industry's current obsession with all things cloudy. There was a good balance of skepticism and optimism on the panel, but overall I came away feeling the panelists never really answered the session's question.
Representatives from HP (Richard Kaufmann, CTO, Scalable Computing Infrastructure Organization), Sun Microsystems (Marc Hamilton, VP of HPC & Cloud Computing), Microsoft (Dan Reed, Scalable & Multicore Computing Strategist), the Jülich Supercomputing Center (Thomas Lippert, Director), Google (Robin Williamson, Engineering Director), Amazon (Simone Brunozzi, Amazon Web Services Technology Evangelist), and Yahoo (Dr. Sanjay Radia, Senior Architect, Hadoop Project) gave their interpretations of the cloud phenomenon and participated in a panel discussion at the end to field questions from the audience.
There was general agreement on the benefits of cloud computing: elastic capacity, pay-per-use model, platform abstraction, economies of scale, and built-in fault tolerance. Unfortunately -- and maybe significantly -- there didn't seem to be much consensus about whether the clouds would usurp traditional HPC infrastructure as the platform of choice.
In particular, the reps from the traditional cloud providers -- Google, Amazon and Yahoo -- didn't directly address how general-purpose clouds would evolve to meet the needs of high performance computing. They did point to frameworks like MapReduce and Hadoop as suitable for processing extremely large data sets in a highly parallel manner. Simone Brunozzi, for instance, highlighted Amazon's Elastic MapReduce Web service, which is specifically designed for data-intensive apps like data mining, machine learning, financial analysis, scientific simulation, and bioinformatics. But no one in this group delved into performance issues or the need for more specialized infrastructure geared toward HPC.
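To make the programming model concrete, here is a minimal word-count sketch in the Hadoop Streaming style, where the mapper and reducer are plain scripts that read stdin and write stdout. The word-count task and file names are my own illustration for this article, not anything presented in the session.

```python
#!/usr/bin/env python
# mapper.py -- emits "word<TAB>1" for every word it reads (Hadoop Streaming convention)
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word.lower(), 1))
```

```python
#!/usr/bin/env python
# reducer.py -- sums the counts for each word; the framework delivers keys already sorted
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

The same two scripts can be tested locally with `cat input.txt | ./mapper.py | sort | ./reducer.py`, or handed to a Hadoop Streaming job (or an Elastic MapReduce job flow), in which case the framework takes care of partitioning the data, scheduling the tasks across the cluster, and recovering from node failures -- the built-in fault tolerance mentioned above.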
The HPC contingent (HP, Sun, Microsoft, Jülich) pointed out that cluster interconnect performance, in particular, will need to be addressed before cloud computing gets much traction with supercomputing users. Amazon EC2 gets halfway there by offering multiple types of servers, called instances, set at different price points depending upon the server profile -- CPU horsepower, memory capacity and I/O capability. But as of today there is no option for, say, InfiniBand-equipped servers. The general consensus among HPC practitioners is that the lack of a high-performance fabric in these general-purpose clouds will restrict adoption. "It's all about the interconnect," noted HP's Richard Kaufmann.
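For illustration, here is a minimal sketch of how a user picks one of those instance profiles when launching a server, using the boto Python library. The AMI ID and key pair name are placeholders, and AWS credentials are assumed to be configured in the environment.

```python
# Launch an EC2 instance of a chosen type with boto (Python).
# The AMI ID and key name are placeholders, not real resources.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")  # assumes AWS credentials in the environment

reservation = conn.run_instances(
    "ami-12345678",             # placeholder machine image
    min_count=1,
    max_count=1,
    instance_type="c1.xlarge",  # the "High-CPU" profile; price scales with the profile
    key_name="my-keypair",      # placeholder SSH key pair
)
instance = reservation.instances[0]
print(instance.id, instance.instance_type)
```

Note that the only HPC-relevant knob here is instance_type; there is no equivalent option for requesting a low-latency fabric such as InfiniBand, which is precisely the gap the HPC vendors were pointing to.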
There are also data security and privacy issues, but those apply to a range of applications, not just HPC. These concerns are well known to cloud providers and presumably will be addressed more completely as more users demand stronger guarantees.
Sun's Marc Hamilton brought up the issue of public and private clouds, noting that if your organization can achieve its own economy of scale with regard to computing capacity, private clouds may be the way to go. According to him, public and private clouds can live side-by-side, but only if interoperability standards (i.e., common cloud APIs) are developed. Even the public cloud arena would benefit from such standards, since users would rather not be locked into a single provider. Hamilton noted, for example, that while the cost of entry into the Amazon cloud is very low, the cost of exit may end up being high. In fact, he blamed the lack of interoperability in Sun's Network.com offering as a major contributor to its demise.
Thomas Lippert took on the role of the cloud skeptic, especially in regard to the kind of cutting-edge supercomputing that goes on at places like Jülich. He believes the cloud model won't support leadership supercomputing, and not just because of performance. The whole supercomputing ecosystem at that level is so specialized (support, hardware, and software), and the lifecycles of such systems (3 to 5 years) so limited, that the cloud model wouldn't apply at all.
Lippert is probably right here. Although elite supercomputing is partially based on commodity hardware and software, the resulting infrastructure and applications are highly customized. Moving the grand challenge application people to the cloud would be like trying to convince Formula One racers to take the bus. Efficiency is not the driving force here.
On the other end of the spectrum was Microsoft's Dan Reed. He believes it's inevitable that cloud will engulf high performance computing, or at least the vast majority of it. The driver will be economics, inasmuch as the cloud makes computing and storing data in bulk extremely inexpensive. The idea is that just as commodity components crowded out specialized HPC architectures, cloud platforms will eventually edge out traditional HPC infrastructure.
Reed thinks much of the resistance to cloud computing by HPC users is actually sociological, not technological. Outside of the supercomputing realm, most users don't care about infrastructure. They're being paid to focus on their applications and produce results. Most of them would like to avoid dealing with the inner workings of the platform. As Reed put it: "Successful technologies are invisible."
Posted by Michael Feldman - June 25, 2009 @ 2:23 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.