July 05, 2012
There has been a lot of discussion regarding the end of Moore’s Law, almost since its inception. Renowned leaders in high performance computing and physics have predicted scenarios detailing how chip advancements will eventually come to a halt. Last week, IEEE Spectrum dedicated a podcast to the subject and talked about a number of design changes aimed at extending silicon’s viability.
In a recent IEEE Spectrum article, associate editor Rachel Courtland explained that silicon has become increasingly difficult to work with as semiconductor manufacturers continue to push the physical limits of the technology. Transistors have become so small that they have begun to leak electrical current. This problem has led to a search for new technologies that may eventually replace or enhance conventional chip designs.
Courtland met up with Bernd Hoefflinger, editor of Chips 2020, a book in which experts in the field lay out their expectations for the future of computing. In the interview, Hoefflinger noted that computational performance is not the only issue at hand. The power consumed by these technologies has a profound impact on their practicality. Said Hoefflinger:
“They expect 1000 times more computations per second within a decade. If we were to try to accomplish this with today’s technology, we would eat up the world’s total electric power within five years. Total electric power!”
He was referring to Dennard scaling, which is closely related to Moore’s Law. Essentially, as transistors get smaller, they become faster and consume less power. Unfortunately, this phenomenon is losing steam, and overcoming this limitation has become a primary focus of semiconductor designers. Hoefflinger believes that if the energy needed for a simple multiplication could be reduced to 1 femtojoule, silicon will keep Moore’s Law alive for the next decade. A femtojoule is roughly 10 percent of the energy released when a human synapse fires.
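The classic Dennard scaling relations the article alludes to can be sketched numerically. In a minimal model (function names and the simplified P = V·I power model are illustrative, not from the article), shrinking linear dimensions by a factor k lets voltage and current drop by 1/k, so power per transistor falls as 1/k², exactly offsetting the k² rise in transistor density:

```python
def dennard_scale(k, voltage, current, area):
    """Toy model of classic Dennard scaling with shrink factor k > 1.

    Voltage and current each scale by 1/k, area by 1/k^2, so power per
    transistor falls by 1/k^2 -- and power density stays constant.
    """
    scaled_voltage = voltage / k
    scaled_current = current / k
    scaled_area = area / k**2
    scaled_power = scaled_voltage * scaled_current  # falls as 1/k^2
    return {
        "voltage": scaled_voltage,
        "current": scaled_current,
        "area": scaled_area,
        "power": scaled_power,
        "power_density": scaled_power / scaled_area,  # unchanged by scaling
    }
```

The breakdown Hoefflinger describes comes from voltage no longer tracking 1/k in practice: when voltage scaling stalls, power density climbs with each shrink instead of holding steady.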
To reach these low-power benchmarks, new 3-D circuit designs have emerged. 3-D chips that use wires to connect multiple dies are already on the market. In addition, tri-gate or FinFET transistors have been developed, but Hoefflinger thinks that another design holds more promise.
According to him, 3-D merged transistors can be developed that combine two transistors into a single device. Instead of giving the p-doped and n-doped transistors their own gates, the two share a single gate, with a PMOS transistor on one side and an NMOS transistor on the other. These have sometimes been called “hamburger transistors.”
Another method to reduce power has to do with how calculations are performed. For example, if multiplication were performed starting with the most significant bits (rather than the least significant bits), it could reduce the number of transistors required for a calculation. While the reduction might not drop the energy to one femtojoule, it may bring consumption down “by an order of magnitude or two.”
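The idea can be illustrated in software (this toy sketch is our own, not from the article): accumulating partial products from the most significant bit of one operand downward means the computation can stop early once enough leading bits of the result are fixed, trading a bounded error for less work:

```python
def msb_first_multiply(a, b, n_bits, keep_bits):
    """Approximate a * b by scanning a's bits from the MSB down.

    Only the top `keep_bits` bits of `a` contribute; the dropped
    low-order partial products bound the approximation error.
    """
    acc = 0
    for i in range(n_bits - 1, n_bits - 1 - keep_bits, -1):
        if (a >> i) & 1:
            acc += b << i  # partial product for bit i of a
    return acc

# Full precision recovers the exact product; fewer bits give an
# underestimate whose error is at most the sum of the dropped terms.
exact = msb_first_multiply(11, 13, 4, 4)    # all 4 bits -> 143
approx = msb_first_multiply(11, 13, 4, 2)   # top 2 bits only
```

In hardware, stopping early like this means fewer partial-product additions actually toggle, which is where the claimed energy savings would come from.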
Lastly, Hoefflinger suggested restructuring chip architecture along the lines of communication circuitry. Such a design would allow for integrated error correction, which in turn permits lower operating voltages.
If all of these power-saving measures are implemented, Moore’s Law may extend beyond 2020. Hoefflinger believes it could go either way, but is encouraged by the fact that these issues are getting a lot of attention right now.
Full story at IEEE Spectrum