September 01, 2011
Thanks to global climate change and rising energy costs, there has been an unrelenting focus on minimizing power consumption across nearly every industry, including computing and, more recently, supercomputing. The prospect looms large of exascale machines that cannot be plugged in because of their energy costs.
According to a 2010 DOE Office of Science report on the challenges and opportunities of building and using exaflop supercomputers, "All of the technical reports on exascale systems identify the power consumption of the computers as the single largest hardware research challenge."
The report goes on to state the fundamental issue: money. At a million dollars or so per megawatt (MW) per year, the cost of running these machines is making the big government agencies more than a little nervous. Today the largest multi-petaflop supercomputers on the planet cost $5 to $10 million per year to power. The energy bill for an exaflop machine built with current technology would run over $2.5 billion a year, says the report.
Not surprisingly, both the DOE and DARPA have zeroed in on energy efficiency in their exascale initiatives, targeting 20 MW as the ceiling for the power consumption of a single exaflop system. That's only about twice the consumption of today's K supercomputer, which, at 8 petaflops, is the most powerful computer in the world (Linpack-wise, at least). Since an exaflop represents more than 100 times the performance of that machine, a lot of energy-saving engineering clearly has to be developed over the next several years to hit that 20 MW target.
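The figures above can be sanity-checked with some back-of-the-envelope arithmetic. The following sketch uses the report's rule of thumb of roughly $1 million per MW per year, and approximates the K computer at 10 MW for 8 petaflops; both are rough working assumptions, not exact specifications.

```python
# Back-of-the-envelope check of the power figures cited above.
# Assumptions: ~$1M per MW per year (the report's rule of thumb),
# K computer at roughly 10 MW for 8 petaflops (approximate).

COST_PER_MW_YEAR = 1e6           # dollars per MW per year
EXAFLOP_BILL = 2.5e9             # dollars/year, per the DOE report

# Implied power draw of an exaflop machine built with 2011 technology
implied_mw = EXAFLOP_BILL / COST_PER_MW_YEAR

# Energy efficiency required to hit the 20 MW target, vs. K's efficiency
k_gflops_per_watt = 8e6 / (10 * 1e6)        # 8 PF = 8e6 GF; 10 MW = 1e7 W
target_gflops_per_watt = 1e9 / (20 * 1e6)   # 1 EF = 1e9 GF; 20 MW = 2e7 W

print(f"Implied exaflop draw with 2011 tech: {implied_mw:.0f} MW")
print(f"K computer: {k_gflops_per_watt:.1f} GF/W; "
      f"20 MW exaflop target: {target_gflops_per_watt:.0f} GF/W")
```

In other words, the $2.5 billion bill implies a machine drawing on the order of 2,500 MW, and the 20 MW target demands roughly a 60-fold jump in flops per watt over the K computer.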
But is this line of thinking justified? This week's contributed feature by the Numerical Algorithms Group's Andrew Jones does a good job of exposing some of the problems with this aggressive focus on exascale power consumption. From his perspective, the concern about energy costs has to be placed against the backdrop of what the machines can accomplish. He writes:
Are we really saying, with our concerns over power, that we simply don't have a good enough case for supercomputing -- the science case, business case, track record of innovation delivery, and so on? Surely if supercomputing is that essential, as we keep arguing, then the cost of the power is worth it.
Indeed. According to exascale's proponents, these supercomputers will enable significant advances in nuclear energy and fusion technology, climate modeling, aerospace engineering, battery design, and combustion. Ironically, advances in these technologies could revolutionize -- or at least significantly evolutionize -- energy production, thus enlarging the supply of the very power on which these machines depend.
There is a cultural imperative in play here too: each successive computer technology must become cheaper and more power efficient than the previous one, regardless of the end-user value those technologies deliver. While this has come to pass in most of the computer industry, it has not at the upper echelons of supercomputing. Those machines still cost hundreds of millions of dollars, and their power consumption is rising.
In fact, as recently as two years ago the average power consumption of the top 5 supercomputers was 3.22 MW; today the top-five average is 4.97 MW. At that rate, the average top 5 machine in 2019 will draw around 27.96 MW, and one or more of those should be an exaflop machine. That's not wildly far from 20 MW, but barring a concerted effort at energy efficiency to bend this curve, we'll overshoot the power target by a fair margin.
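The extrapolation above is a simple compound-growth projection from the two data points quoted: roughly 3.22 MW in 2009 and 4.97 MW in 2011. A minimal sketch, assuming a constant annual growth factor:

```python
# Extrapolating the top-5 average power trend from the two figures above.
# Data points (assumed years): ~3.22 MW in 2009, ~4.97 MW in 2011.

mw_2009, mw_2011 = 3.22, 4.97

# Compound annual growth factor over the two-year gap
annual_growth = (mw_2011 / mw_2009) ** (1 / 2)

# Project eight more years out, to 2019
mw_2019 = mw_2011 * annual_growth ** (2019 - 2011)
print(f"Projected top-5 average in 2019: {mw_2019:.1f} MW")  # roughly 28 MW
```

The projection lands near 28 MW, matching the article's figure to within rounding, and well above the 20 MW target.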
But that is only for the first batch of such machines, which will blaze the trail at the end of the decade. The greater value of exascale supercomputing will be delivered by less costly, less power-hungry, and, presumably, more numerous machines built and deployed in the 2020s and beyond -- analogous to the petascale systems of the current decade. Those supercomputers will be more practical in every way than the first custom-built exaflop systems of the late 2010s.
According to Jones, the biggest roadblock to delivering exascale computing is software. Even though there are several initiatives in the pipeline to get exascale-capable tools, algorithms, and libraries developed in advance, applications will be hard pressed to take full advantage of the first exascale systems. Even today, only a handful of applications can achieve a sustained petaflop, three years after Roadrunner hit that milestone.
Unlike hardware advances, software innovation comes in fits and starts and requires a whole ecosystem of talent to move forward. Developing software has been the enduring challenge for computing of every stripe and certainly requires more sophistication than sending a check to the power company. As Jones puts it:
It certainly requires money, but it needs other scarce resources too, specifically time and skills. That involves a large pool of skilled parallel software engineers, scientists with computational expertise, numerical algorithms research and so on. Scarce resources like these are possibly even harder to create than money!
Posted by Michael Feldman - September 01, 2011 @ 8:45 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.