May 05, 2011
As the designated enforcer of Moore's Law, Intel has consistently found a way to keep its two-year process shrink cadence on schedule. This time it's with three-dimensional semiconductors. On Wednesday, Intel announced it has once again "re-invented" the transistor with its new 3D Tri-Gate technology. "For the first time in history, the transistor has officially entered the third dimension," declared Intel Senior Fellow Mark Bohr.
The company will begin churning out the Tri-Gate silicon in its upcoming "Ivy Bridge" processors, the 22nm shrink of the current generation 32nm Sandy Bridge processors. Intel maintains the 3D transistor technology will only add 2 to 3 percent to the cost of manufacturing the wafers, so system costs should take only a minor hit.
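To see why a small wafer-cost bump translates into an even smaller per-chip hit, here is a back-of-the-envelope sketch. The wafer cost and die count below are hypothetical round numbers, not Intel's actual figures:

```python
# Rough illustration of why a 2-3 percent wafer-cost increase barely
# moves per-chip cost. All dollar figures and die counts here are
# hypothetical, chosen only to make the arithmetic concrete.

wafer_cost = 5000.0        # hypothetical cost of one wafer, USD
good_dies_per_wafer = 400  # hypothetical yielded dies per wafer

base_die_cost = wafer_cost / good_dies_per_wafer
tri_gate_die_cost = (wafer_cost * 1.03) / good_dies_per_wafer  # +3% wafer cost

print(base_die_cost)                       # 12.5
print(tri_gate_die_cost - base_die_cost)   # 0.375 -- under 40 cents per die
```

A fraction of a dollar per die gets further diluted once the chip is packaged, tested, and built into a system, which is why the system-level impact should be minor.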
The Tri-Gate technology has been in the works since 2002 in anticipation of the time when the traditional 2D planar technology would run out of steam. Intel actually demonstrated Tri-Gate circuits in SRAM back in 2009, but this is the first time the 3D technology will appear in microprocessors.
The problem is that as semiconductor geometries shrink, it gets increasingly difficult to prevent electrons from leaking out of the gates, especially at higher voltages. The solution was to build the channel up into a three-dimensional fin structure so the gate can wrap around it, making it harder for the electrons to escape. Essentially, the electrons are blocked on three sides instead of just the one in a flat transistor.
The first Ivy Bridge processors are slated for production in the second half of 2011, probably following Intel's usual pattern of starting with the desktop chips and following with the Xeon server parts. The mobile Atom chips will be the last to see the 3D technology, with availability not expected until the second half of 2012.
The other foundry players -- the IBM fab consortium, TSMC, GlobalFoundries, et al. -- all plan to transition to 3D transistors (where the technology is more generally known as FinFET) at some point, but none have immediate plans to do so. TSMC says it will implement its 3D technology at the 14nm node. According to Bohr, the commercialization of its Tri-Gate technology puts Intel three years ahead of the other foundries.
That remains to be seen, but Intel does appear to be at least temporarily widening its lead in process technology. But why should anyone care? As AMD, NVIDIA and other chip vendors like to remind us, people buy computing products not process technologies.
But the fundamentals do matter. Computing heft and power efficiency begin at the silicon, and Intel has maintained a built-in advantage over its competition by getting to the smaller geometries first. Less gate leakage means you can boost clock speed for more performance or, if power consumption is the goal, slow down the clock and still maintain the performance of the previous generation. According to Intel, the current Tri-Gate technology will enable a 50 percent power reduction at constant performance, or a 37 percent performance increase at low voltages, compared to the 32nm technology. Those are not just marginal improvements.
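The power-versus-performance lever here comes down to the classic CMOS dynamic power relation, P ∝ C·V²·f. The sketch below uses that textbook model with hypothetical numbers; Intel's actual 50 percent and 37 percent figures also fold in leakage and threshold-voltage effects that this simple model ignores:

```python
# Back-of-the-envelope sketch of the CMOS dynamic power relation
# P ~ C * V^2 * f (capacitance, supply voltage, clock frequency).
# All values are in arbitrary units and are purely illustrative.

def dynamic_power(c, v, f):
    """Switching power of a CMOS circuit, up to a constant factor."""
    return c * v**2 * f

baseline = dynamic_power(c=1.0, v=1.0, f=1.0)

# Holding frequency (i.e., performance) constant, a roughly 29 percent
# supply-voltage reduction alone halves dynamic power: 0.71^2 ~= 0.50.
low_voltage = dynamic_power(c=1.0, v=0.71, f=1.0)
print(round(low_voltage / baseline, 2))  # 0.5
```

Because voltage enters the equation squared, a transistor that switches reliably at lower voltage, as less-leaky Tri-Gate devices can, buys a disproportionate power saving; alternatively the designer can spend that headroom on higher clocks instead.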
For performance-minded customers, more efficient transistors mean higher clocks for faster execution, and Intel has taken advantage of this to maintain its speed advantage. In the x86 server realm, AMD's 45nm Magny-Cours Opterons have had to go up against Intel's higher-clocked 32nm Westmere Xeon processors. To blunt the process technology disadvantage, AMD has resorted to higher core counts and a greater emphasis on memory bandwidth. That's a good strategy, especially for high performance computing, but even there Intel has maintained a dominant market share. The next matchup in 2012 will pit the 22nm Ivy Bridge Xeons against the 32nm Interlagos Opterons, so that's shaping up to be a rerun of previous Xeon-Opteron battles.
Meanwhile, NVIDIA's Fermi processors are on TSMC's 40nm process node. Although GPUs compete only indirectly with the x86 CPUs, in the HPC space, Intel's upcoming MIC accelerator certainly will. And since the first MIC product, Knights Corner, is going to be using Intel's new 3D 22nm technology, the performance matchup should be especially interesting. Of course, by the time Knights Corner hits the streets (presumably the first half of 2012), NVIDIA will have moved on to TSMC's 28nm node and the next-generation Kepler architecture, so don't expect any GPGPU-killing performances from MIC on its first go-around.
In 2013, Intel's 22nm technology will be implemented in Haswell, the next microarchitecture, which will supplant Sandy Bridge. (With Haswell, we might get our first taste of actual 3D processors, aka chip stacking.) Then in 2014, Intel intends to extend the Tri-Gate technology to the 14nm node. Beyond that, it looks like Intel may have to re-invent the transistor once again.
Here's a short video of Mark Bohr talking about the new technology in a cute Disney-like presentation:
Posted by Michael Feldman - May 05, 2011 @ 4:26 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.