September 24, 2010
In a thought-provoking piece over at ZDNet, Numerical Algorithms Group's Andrew Jones takes a look at the supercomputing power consumption equation, asking whether its current trajectory is really as untenable as it is usually made out to be.
There are a range of estimates for the likely power consumption of the first exaflops supercomputers, which are expected at some point between 2018 and 2020. But probably the most accepted estimate is 120MW, as set out in the Darpa Exascale Study edited by Peter Kogge (PDF).
At this figure, the supercomputing community panics and says it is far too much -- we must get it down to between 20MW and 60MW, depending on who you ask -- and we worry even that is too much. But is it?
What follows is a comparison of today's largest supercomputers with their closest kin, major scientific research facilities.
In Jones' opinion:
[T]he largest supercomputers at any time, including the first exaflops, should not be thought of as computers. They are strategic scientific instruments that happen to be built from computer technology. Their usage patterns and scientific impact are closer to major research facilities such as Cern, Iter, or Hubble.
Thinking of the big supercomputers that way, their power consumption and other costs -- construction, operation, and so forth -- are comparable to other major research centers and not that outrageous, concludes Jones.
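To put that figure in perspective with some back-of-the-envelope arithmetic: a 120MW machine running around the clock draws roughly a terawatt-hour of electricity per year. At an assumed industrial rate of $0.10 per kilowatt-hour (our illustrative assumption, not a number from Jones or the DARPA study), that works out to something on the order of $100 million a year for power alone -- a big number, but not an outlandish one next to the construction and operating budgets of flagship science facilities.

```python
# Back-of-the-envelope annual electricity bill for a 120 MW machine.
# The $0.10/kWh rate is an illustrative assumption, not a figure from the article.
power_mw = 120
hours_per_year = 8760
price_per_kwh = 0.10

energy_kwh = power_mw * 1000 * hours_per_year   # ~1.05 billion kWh (~1 TWh)
cost = energy_kwh * price_per_kwh                # ~$105 million

print(f"{energy_kwh / 1e9:.2f} TWh per year, roughly ${cost / 1e6:.0f}M in electricity")
```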
Jones also tackles the question of whether it makes sense to continually improve and replace systems every couple of years (as we currently do), or whether it would offer more value to society to collaborate on one mega-supercomputer per decade -- pouring ten years of resources into its construction and then relying on that single system for the following ten years. There are, of course, pros and cons to each path. Because supercomputing performance increases exponentially, the first option delivers more aggregate exaflops per year; on the other hand, the second option saves the resources spent continually rewriting and revalidating code, and there is real value to society in having a 2030-era system ten years ahead of schedule.
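One way to see the trade-off Jones describes is with a toy model: assume performance doubles every two years (a rough stand-in for historical TOP500 trends -- our assumption, not a figure from the article) and add up the exaflop-years each strategy delivers over a decade. The sketch below also ignores the possibility that pooling ten years of funding buys a proportionally bigger machine up front, so it is only a first-order illustration.

```python
# Toy model: aggregate compute (exaflop-years) delivered over a decade by
#   (a) replacing the flagship system every two years, vs.
#   (b) building one system at the start and running it for ten years.
# Assumptions (not from the article): performance starts at 1 exaflops and
# doubles every two years; each system runs until it is replaced.

DOUBLING_PERIOD_YEARS = 2.0
START_EXAFLOPS = 1.0
HORIZON_YEARS = 10

def perf_at(year):
    """Peak performance (exaflops) of a system deployed in the given year."""
    return START_EXAFLOPS * 2 ** (year / DOUBLING_PERIOD_YEARS)

# Strategy (a): a new system every two years, each used for two years.
rolling = sum(perf_at(y) * 2 for y in range(0, HORIZON_YEARS, 2))

# Strategy (b): one system built at year 0 and used for the full decade.
single = perf_at(0) * HORIZON_YEARS

print(f"Rolling upgrades: {rolling:.1f} exaflop-years over {HORIZON_YEARS} years")
print(f"Single decade-long system: {single:.1f} exaflop-years")
```

Under these assumptions the rolling-upgrade path delivers several times the aggregate compute, which is exactly why the community defaults to it; Jones' counterpoint is that raw exaflops are not the only measure of value to society.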
Jones is not sold on either path, but wonders why we are so set on the first option without giving some consideration to the second. Check out the full article for more in-depth treatment of these ideas.
Full story at ZDNet