January 03, 2013
New design for a basic component of all computer chips boasts the highest 'carrier mobility' yet measured
CAMBRIDGE, Mass., Jan. 2 -- Almost all computer chips use two types of transistors: one called p-type, for positive, and one called n-type, for negative. Improving the performance of the chip as a whole requires parallel improvements in both types.
At the IEEE's International Electron Devices Meeting (IEDM) in December, researchers from MIT's Microsystems Technology Laboratories (MTL) presented a p-type transistor with the highest "carrier mobility" yet measured. By that standard, the device is twice as fast as previous experimental p-type transistors and almost four times as fast as the best commercial p-type transistors.
Like other experimental high-performance transistors, the new device derives its speed from its use of a material other than silicon: in this case, germanium. Alloys of germanium are already found in commercial chips, so germanium transistors could be easier to integrate into existing chip-manufacturing processes than transistors made from more exotic materials.
The new transistor also features what's called a trigate design, which could solve some of the problems that plague computer circuits at extremely small sizes (and which Intel has already introduced in its most advanced chip lines). For all these reasons, the new device offers a tantalizing path forward for the microchip industry — one that could help sustain the rapid increases in computing power, known as Moore's Law, that consumers have come to expect.
Pluses and minuses
A transistor is basically a switch: In one position, it allows charged particles to flow through it; in the other position, it doesn't. In an n-type transistor, the particles — or charge carriers — are electrons, and their flow produces an ordinary electrical current.
In a p-type transistor, on the other hand, the charge carriers are positively charged "holes." A p-type semiconductor doesn't have enough electrons to balance out the positive charges of its atoms; as electrons hop back and forth between atoms, trying futilely to keep them electrically balanced, holes flow through the semiconductor, in much the way waves propagate across water molecules that locally move back and forth by very small distances.
"Carrier mobility" measures how quickly charge carriers — whether positive or negative — move in the presence of an electric field. Increased mobility can translate into either faster transistor switching speeds, at a fixed voltage, or lower voltage for the same switching speed.
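The relationship between mobility, field, and carrier speed can be sketched in a few lines. This is an illustrative example, not taken from the paper: the formula is the standard drift relation v = μE, and the numeric values are hypothetical.

```python
# Illustrative sketch (not from the article): carrier drift velocity from
# mobility and field, v_drift = mu * E.
# Units: mobility in cm^2/(V*s), field in V/cm, velocity in cm/s.

def drift_velocity(mobility_cm2_per_vs, field_v_per_cm):
    """Return carrier drift velocity for a given mobility and electric field."""
    return mobility_cm2_per_vs * field_v_per_cm

# Hypothetical comparison at the same field: doubling mobility doubles the
# drift velocity, which is why higher mobility can mean faster switching at
# a fixed voltage, or the same switching speed at a lower voltage.
field = 1.0e4  # V/cm, an illustrative value
v_baseline = drift_velocity(100.0, field)  # baseline p-type device
v_improved = drift_velocity(200.0, field)  # device with twice the mobility
print(v_improved / v_baseline)  # -> 2.0
```

The linearity of the drift relation is what lets a mobility gain be traded directly for either speed or voltage.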
For decades, each logic element in a computer chip has consisted of complementary n-type and p-type transistors whose clever arrangement drastically reduces the chip's power consumption. In general, it's easier to improve carrier mobility in n-type transistors; the MTL researchers' new device demonstrates that p-type transistors should be able to keep up.
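The complementary arrangement can be illustrated with the simplest such logic element, an inverter. In this hedged sketch (standard CMOS behavior, not a detail from the article), the p-type device conducts when the input is low and the n-type when it is high, so exactly one of them is on at rest and no static current path connects supply to ground.

```python
# Sketch of the complementary (CMOS) arrangement the article describes:
# in an inverter, the p-type transistor conducts for a low input and pulls
# the output high; the n-type conducts for a high input and pulls it low.

def cmos_inverter(input_bit):
    """Model an inverter built from one p-type and one n-type transistor."""
    p_on = (input_bit == 0)  # p-type: on when the input is low
    n_on = (input_bit == 1)  # n-type: on when the input is high
    assert p_on != n_on      # exactly one device conducts: no static current
    return 1 if p_on else 0

for bit in (0, 1):
    print(bit, "->", cmos_inverter(bit))  # prints 0 -> 1 and 1 -> 0
```

Because one network is always off, the pair draws significant current only while switching, which is the power advantage the article alludes to.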
Handling the strain
Judy Hoyt, a professor of electrical engineering and computer science; her graduate students Winston Chern, lead author on the new paper, and James T. Teherani; Pouya Hashemi, who was an MIT postdoc at the time and is now with IBM; Dimitri Antoniadis, the Ray and Maria Stata Professor of Electrical Engineering; and colleagues at MIT and the University of British Columbia achieved their record-setting hole mobility by "straining" the germanium in their transistor — forcing its atoms closer together than they'd ordinarily find comfortable. To do that, they grew the germanium on top of several different layers of silicon and a silicon-germanium composite. The germanium atoms naturally try to line up with the atoms of the layers beneath them, which compresses them together.
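The amount of compression such a structure imposes can be estimated from lattice constants. The sketch below is a back-of-the-envelope calculation, not a figure from the paper: the silicon and germanium lattice constants are standard textbook values, the linear (Vegard's-law) interpolation for the alloy is a common approximation, and the 50 percent buffer composition is an assumption chosen for illustration.

```python
# Back-of-the-envelope estimate (not from the paper) of the compressive
# strain in a germanium film grown on a relaxed Si(1-x)Ge(x) buffer.

A_SI = 5.431  # silicon lattice constant, angstroms (textbook value)
A_GE = 5.658  # germanium lattice constant, angstroms (textbook value)

def sige_lattice_constant(x):
    """Vegard's-law estimate for the relaxed Si(1-x)Ge(x) lattice constant."""
    return A_SI + x * (A_GE - A_SI)

def biaxial_strain_of_ge_on(x):
    """In-plane strain of a Ge film forced to match a relaxed SiGe buffer.

    Negative values mean compression: the Ge atoms are squeezed closer
    together than in relaxed germanium.
    """
    a_buffer = sige_lattice_constant(x)
    return (a_buffer - A_GE) / A_GE

# On a hypothetical 50% SiGe buffer, the Ge film is compressed by about 2%:
print(round(biaxial_strain_of_ge_on(0.5) * 100, 2), "% strain")  # -> -2.01
```

Strains of this magnitude are enough to distort the valence band and raise hole mobility, which is the effect the team exploits.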
"It's kind of a unique set of material structures that we had to do, and that was actually fabricated here, in the MTL," Hoyt says. "That's what enables us to explore these materials at the limits. You can't buy them at this point."
"These high-strain layers want to break," Teherani adds. "We're particularly successful at growing these high-strain layers and keeping them strained without defects." Indeed, Hoyt is one of the pioneers of strained-silicon transistors, a technology found today in almost all commercial computer chips. At last year's IEDM, she and Eugene Fitzgerald, the Flemings-SMA Professor of Materials Science and Engineering at MIT, received the IEEE's Andrew S. Grove Award for outstanding contributions to solid-state devices and technology. The award announcement cited Hoyt's "groundbreaking contributions involving strained-silicon semiconductor materials."
Another crucial aspect of the new transistor is its trigate design. If a transistor is a switch, throwing the switch means applying a charge to the transistor's "gate." In a conventional transistor, the gate sits on top of the "channel," through which the charge carriers flow. As transistors have grown smaller, their gates have shrunk, too. But at smaller sizes, that type of lockstep miniaturization won't work: Gates will become too small to reliably switch transistors off.
In the trigate design, the channels rise above the surface of the chip, like boxcars sitting in a train yard. To increase its surface area, the gate is wrapped around the channel's three exposed sides — hence the term "trigate." By demonstrating that they can achieve high hole mobility in trigate transistors, Hoyt and her team have also shown that their approach will remain useful in the chips of the future.
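The surface-area advantage of the wrapped gate is easy to quantify. The sketch below uses the standard effective-width approximation for a trigate fin (top face plus two sidewalls); the fin dimensions are hypothetical, not from the MTL device.

```python
# Illustrative sketch (dimensions hypothetical): the effective gate width of
# a trigate fin versus a planar gate with the same footprint. The trigate
# gate covers the fin's top face plus its two exposed sidewalls.

def planar_gate_width(w_nm):
    """Planar gate: controls only the top of a channel of width w."""
    return w_nm

def trigate_gate_width(w_nm, h_nm):
    """Trigate: top face of width w plus two sidewalls of height h each."""
    return w_nm + 2 * h_nm

# For a hypothetical fin 10 nm wide and 30 nm tall, the trigate gate
# controls seven times the channel surface of a planar gate:
print(trigate_gate_width(10, 30) / planar_gate_width(10))  # -> 7.0
```

That extra controlled surface is what lets a small gate keep switching the channel off reliably as dimensions shrink.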
The MIT researchers' work was supported by the U.S. Defense Advanced Research Projects Agency and the Semiconductor Research Corporation.
Source: Larry Hardesty, MIT News Office