January 29, 2013
Science Live hosted an online chat last Thursday, entitled The Future of Supercomputers, featuring special guests and prominent HPC experts Jack Dongarra from the University of Tennessee and Horst Simon from Lawrence Berkeley National Laboratory. The discussion focused on the next big thing in supercomputing: the coming class of exascale systems and all that entails, namely, developing useful machines that are hundreds of times faster than today's best-in-class systems.
Science Magazine's Robert Service begins by asking a basic, yet crucial question: Why? Why do we need this level of computing power? Jack Dongarra responds that the benefits reach into virtually every segment of technology and science, from energy research to life science, manufacturing and even entertainment. He argues that the more powerful a nation's computer capability, the better it can compete on the global playing field. What's more, there are "considerable flow-down benefits" to the entire IT field, from smaller computer systems all the way to handheld consumer devices.
Concerning the main challenges to fielding such systems, Horst Simon echoed a common opinion: power consumption.
"Extrapolation from today's technology to the exascale would lead to systems with 100 MW or more power requirements," noted Simon, using the current TOP500 chart-topper Titan as an example. The ORNL machine requires 8 MW to output 20 petaflops, so scaling linearly, a similar machine delivering one exaflops (a 50-fold increase) would require 400 MW. He estimates the cost to operate such a system at $400 million per year.
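Simon's extrapolation is a simple linear scaling from Titan's figures. A quick sketch of the arithmetic, using the numbers quoted in the article and assuming the common rule of thumb of roughly $1 million per megawatt-year for operating cost:

```python
# Back-of-the-envelope extrapolation from Titan to exascale,
# using the figures quoted in the article.
titan_flops = 20e15      # Titan: 20 petaflops
titan_power_mw = 8.0     # Titan: 8 MW
exa_flops = 1e18         # target: 1 exaflops

# Naive assumption: power scales linearly with performance.
scale = exa_flops / titan_flops            # 50x faster
exa_power_mw = titan_power_mw * scale      # 400 MW

# Assumed rule of thumb: ~$1M per MW per year to operate.
annual_cost_musd = exa_power_mw * 1.0      # $400M per year

print(f"{scale:.0f}x scale-up -> {exa_power_mw:.0f} MW, "
      f"~${annual_cost_musd:.0f}M/year")
```

The point of the exercise is that naive scaling lands well above Simon's 100 MW threshold, which is why he argues exascale is infeasible without major efficiency breakthroughs.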
The HPCers both emphasized that breaking the exascale barrier will require a concerted effort on multiple fronts, as well as major changes to hardware, software, algorithms and applications; in short, it's an ecosystem.
In Simon's words: "There is no single revolutionary strategy that will get us to Exaflops. It will have to be several breakthroughs that need to be achieved in the same timeframe. That makes it so hard."
Simon and Dongarra also stressed the major investment that will be required, especially if the US is to maintain its technological lead.
Simon observed that China has the "budget and willingness" to beat the US to exascale, but in his opinion, they tend to be emulators rather than true innovators. Sometimes this strategy pays off, however, as when China leveraged US GPU expertise to achieve supercomputing dominance in 2010 with Tianhe-1A.
Building an exascale-class system in the 2020 timeframe will cost around $200 million, according to Dongarra's best estimate, and that figure covers only the machine itself, not the difficult research that must come first.
At about the halfway mark, we were reminded that this was very much an open forum when an audience member chimed in to ask what "HPC" stood for. What a perfect opportunity to consider the role of community outreach in encouraging public support for HPC.
The remainder of this one-hour talk delves into a multitude of important topics as they relate to exascale computing, including the relevance of the Linpack benchmark, programmability challenges, the stagnation of government funding, environmental implications, and the role of international collaboration. The entire transcript is available online and is well worth the 10 or 15 minutes it takes to read.
Thank you to Science Magazine and writer/moderator Robert Service for hosting this important conversation.