September 19, 2012
Near-Threshold Voltage, or NTV, has the potential to significantly cut energy requirements for high performance computing. This is becoming especially important for the largest supercomputers, which are already well into the multi-megawatt realm and are expected to hit tens of megawatts in the exascale era.
Intel recently demonstrated its NTV capabilities at ISSCC 2012, running an x86 microprocessor on just 2 milliwatts of power. The company published three papers on the results, which David Kanter analyzed and discussed in an article at Real World Technologies.
The threshold voltage is the minimum gate voltage required to switch a transistor on. Intel has found that a circuit uses energy most efficiently when operated near that threshold voltage, just above the point at which its transistors turn on.
There are a couple of intrinsically tricky things about operating at such a low voltage. The first is limiting dI/dt, the rate of change of current over time. Rapid spikes or drops in current, especially those that occur when a particular transistor accidentally drops below its threshold, can create computational errors.
Ideally, all transistors would be created equal. Statistically, however, since a single chip can contain billions of transistors, some will inevitably perform worse than others.
Another challenge to overcome is the accompanying loss of power. Dynamic power is proportional to the square of the voltage, so a 10 percent reduction in voltage yields roughly a 19 percent reduction in power. While reduced voltage is a great way to increase efficiency, it is also a great way to ensure your CPU does not have the juice required to run what it needs to.
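That voltage-power relationship is easy to check with a quick calculation. The sketch below assumes the simplified CMOS dynamic-power model (power proportional to the square of voltage, with capacitance and frequency held constant); the function name is illustrative, not from Intel's papers.

```python
# Simplified CMOS dynamic-power model: P is proportional to V^2
# (capacitance and frequency held constant). Illustrative only.

def power_reduction(voltage_scale):
    """Fractional power reduction for a given voltage scale factor."""
    return 1.0 - voltage_scale ** 2

# A 10 percent voltage reduction is a scale factor of 0.9:
reduction = power_reduction(0.9)
print(f"{reduction:.0%} power reduction")  # prints "19% power reduction"
```

The quadratic dependence is exactly why near-threshold operation is so attractive: small voltage cuts buy disproportionately large power savings.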
Further, NTV significantly decreases frequency. “The 32nm Pentium core,” Kanter said of a core running at NTV, “increased efficiency by about 5×, by running at slightly under 100MHz. The maximum frequency was 915MHz, so the absolute performance decreased by about an order of magnitude.”
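Kanter's figures imply a steep power saving alongside the performance drop. A minimal sketch of that arithmetic, assuming performance scales with frequency and that energy efficiency means operations per joule (so relative power is relative performance divided by relative efficiency); the 100MHz, 915MHz, and 5× figures come from the quote above, while the derived power ratio is an inference, not a number from the papers.

```python
# Figures from Kanter's description of the 32nm NTV Pentium core.
nominal_mhz = 915      # maximum frequency at nominal voltage
ntv_mhz = 100          # approximate frequency near threshold
efficiency_gain = 5    # ~5x more energy-efficient at NTV

# Performance scales with frequency; efficiency = performance / power,
# so relative power = relative performance / relative efficiency.
perf_ratio = ntv_mhz / nominal_mhz          # roughly an order of magnitude slower
power_ratio = perf_ratio / efficiency_gain

print(f"performance: {perf_ratio:.2f}x, power: {power_ratio:.3f}x")
# prints "performance: 0.11x, power: 0.022x"
```

In other words, under these assumptions the NTV core draws on the order of one-fiftieth the power, which is why the trade only makes sense for workloads that can make up the lost speed with parallelism.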
As he notes, NTV would be impractical for general-purpose CPUs, as they are generally used for applications that expect reasonable single-threaded performance. Thus they require the higher voltages needed to drive faster clocks. On the other hand, HPC and its massively parallel computing environment could benefit greatly from NTV.
“Based on our analysis of these papers,” Kanter wrote, “Near-Threshold Voltage computing techniques are most applicable to highly parallel workloads. Generally, NTV is an ideal fit for HPC workloads and works very well for graphics, but not general purpose CPUs.”
Since HPC is highly parallelized and requires backups and fail-safe mechanisms throughout a computation, it can withstand the consequences of a single transistor giving out. HPC computations are also not expected to happen anywhere near real time, making the frequency decrease less of a problem. This is especially true of “throughput” accelerators like GPGPUs and Intel’s Xeon Phi, which are naturally frequency-constrained because of their high core counts.
There is a sense that this technology is being developed specifically to benefit HPC, rather than benefiting it by accident. This is not only hinted at by the Intel papers themselves, but is also indirectly supported by who funded them, specifically the US government. “Perhaps most telling,” Kanter wrote, “US government grants typically focus on areas of national interest. Graphics simply is not vital to the country, whereas HPC is a critical tool for the Departments of Defense, Energy, and any number of intelligence agencies.”
Full story at Real World Technologies