February 20, 2012
WEST LAFAYETTE, Ind., Feb. 19 -- The smallest transistor ever built - in fact, the smallest transistor that can be built - has been created from a single phosphorus atom by an international team of researchers at the University of New South Wales, Purdue University and the University of Melbourne.
The single-atom device was described Sunday (Feb. 19) in a paper in the journal Nature Nanotechnology.
Michelle Simmons, group leader and director of the ARC Centre for Quantum Computation and Communication at the University of New South Wales, says the development is less about improving current technology than about building the technology of the future.
"This is a beautiful demonstration of controlling matter at the atomic scale to make a real device," Simmons says. "Fifty years ago when the first transistor was developed, no one could have predicted the role that computers would play in our society today. As we transition to atomic-scale devices, we are now entering a new paradigm where quantum mechanics promises a similar technological disruption. It is the promise of this future technology that makes this present development so exciting."
The same research team announced in January that it had developed a wire of phosphorus and silicon - just one atom tall and four atoms wide - that behaved like copper wire.
Simulations of the atomic transistor to model its behavior were conducted at Purdue using nanoHUB technology, an online community resource site for researchers in computational nanotechnology.
Gerhard Klimeck, who directed the Purdue group that ran the simulations, says this is an important development because it shows how small electronic components can be engineered.
"To me, this is the physical limit of Moore's Law," Klimeck says. "We can't make it smaller than this."
Although definitions vary, Moore's Law, simply stated, holds that the number of transistors that can be placed on a processor will double approximately every 18 months. The latest Intel chip, the "Sandy Bridge," is built on a 32-nanometer manufacturing process and packs 2.3 billion transistors. A single phosphorus atom, by comparison, is just 0.1 nanometers across - a scale that would dramatically shrink processors built this way, although it may be many years before single-atom processors are actually manufactured.
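The scale gap described above can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch: the 18-month doubling period and the 32 nm and 0.1 nm figures come from the paragraph above, and the projection assumes density scales with the square of the linear shrink.

```python
import math

# Figures from the article
feature_size_nm = 32.0   # Sandy Bridge process node
atom_size_nm = 0.1       # approximate diameter of a phosphorus atom
doubling_months = 18     # one common statement of Moore's Law

# Linear shrink factor from today's node down to a single atom
shrink = feature_size_nm / atom_size_nm        # 320x in each dimension
density_gain = shrink ** 2                     # ~102,400x more devices per area

# If density doubles every 18 months, how long until that gain?
doublings = math.log2(density_gain)            # ~16.6 doublings
years = doublings * doubling_months / 12       # ~25 years

print(f"linear shrink: {shrink:.0f}x, density gain: {density_gain:.0f}x")
print(f"~{doublings:.1f} doublings, roughly {years:.0f} years at Moore's-Law pace")
```

Even under these idealized assumptions, reaching atomic density by conventional scaling would take on the order of a couple of decades, which is why a directly engineered single-atom device is notable.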
The single-atom transistor does have one serious limitation: It must be kept very cold, at least as cold as liquid nitrogen, or about minus 321 degrees Fahrenheit (minus 196 degrees Celsius).
"The atom sits in a well or channel, and for it to operate as a transistor the electrons must stay in that channel," Klimeck says. "At higher temperatures, the electrons move more and go outside of the channel. For this atom to act like a metal you have to contain the electrons to the channel.
"If someone develops a technique to contain the electrons, this technique could be used to build a computer that would work at room temperature. But this is a fundamental question for this technology."
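For reference, the operating-temperature constraint above can be checked with a one-line unit conversion (liquid nitrogen boils at about minus 196 degrees Celsius; the function name below is ours, not from the research):

```python
def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

liquid_n2_c = -196  # approximate boiling point of liquid nitrogen, in Celsius
print(celsius_to_fahrenheit(liquid_n2_c))  # -320.8
```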
Although single atoms serving as transistors have been observed before, this is the first time a single-atom transistor has been controllably engineered with atomic precision. The structure even has markers that allow researchers to attach contacts and apply a voltage, says Martin Fuechsle, a researcher at the University of New South Wales and lead author on the journal paper.
"The thing that is unique about what we have done is that we have, with atomic precision, positioned this individual atom within our device," Fuechsle says.
Simmons says this control is the key step in making a single-atom device. "By achieving the placement of a single atom, we have, at the same time, developed a technique that will allow us to place several of these single-atom devices towards the goal of developing a scalable system."
The single-atom transistor could lead the way to building a quantum computer that works by controlling the electrons and thereby the quantum information, or qubits. Some scientists, however, have doubts that such a device can ever be built.
"Whilst this result is a major milestone in scalable silicon quantum computing, it does not answer the question of whether quantum computing is possible or not," Simmons says. "The answer to this lies in whether quantum coherence can be controlled over large numbers of qubits. The technique we have developed is potentially scalable, using the same materials as the silicon industry, but more time is needed to realize this goal."
Klimeck says despite the hurdles, the single-atom transistor is an important development.
"This opens eyes because it is a device that behaves like metal in silicon. This will lead to many more discoveries."
The research project spanned the globe and was the result of many years of effort.
"When I established this program 10 years ago, many people thought it was impossible with too many technical hurdles. However, on reading the literature I could not see any practical reason why it would not be possible," Simmons says. "Brute determination and systematic studies were necessary - as well as having many outstanding students and postdoctoral researchers who have worked on the project."
Klimeck notes that modern collaboration and community-building tools such as nanoHUB played an important role.
"This was a trans-Pacific collaboration that came about through the community created in nanoHUB. Now Purdue graduate students spend time studying at the University of New South Wales, and their students travel to Purdue to learn more about nanotechnology. It has been a rewarding collaboration, both for the scientific discoveries and for the personal relationships that were formed."
University of New South Wales video on discovery: http://www.youtube.com/watch?v=ue4z9lB5ZHg&feature=youtu.be
Source: Steve Tally, Purdue University