December 05, 2011
Prototypes developed for the first time in real-world manufacturing environments are a critical step toward transferring research into commercial devices.
WASHINGTON DC, Dec. 5 -- Today at the IEEE International Electron Devices Meeting, IBM (NYSE: IBM) scientists unveiled several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips.
For more than 50 years, computer processors have increased in power and shrunk in size at a tremendous rate. However, today's chip designers are running into the physical limits underlying Moore's Law, slowing the pace of product innovation achievable from scaling alone.
With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears the physical scalability limits of the silicon transistor.
Following years of key physics advances previously only achieved in a laboratory, IBM scientists successfully integrated the development and application of new materials and logic architectures on 200mm (eight inch) diameter wafers. These breakthroughs could potentially provide a new technological basis for the convergence of computing, communication, and consumer electronics.
Racetrack Memory

· Racetrack memory combines the benefits of magnetic hard drives and solid-state memory to overcome the challenges of growing memory demands and shrinking devices.
· Proving the feasibility of this type of memory, IBM researchers today detailed the first Racetrack memory device integrated with CMOS technology on 200mm wafers, the culmination of seven years of physics research.
· The researchers demonstrated both read and write functionality on an array of 256 in-plane, magnetized horizontal racetracks. This development lays the foundation for further improving Racetrack memory's density and reliability using perpendicularly magnetized racetracks and three-dimensional architectures.
· This breakthrough could lead to a new type of data-centric computing that allows massive amounts of stored information to be accessed in less than a billionth of a second.
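The bullets above describe Racetrack memory's defining behavior: bits are stored as magnetic domains along a nanowire, and current pulses shift the whole domain pattern past fixed read/write heads. As a rough illustration only, the following is a hypothetical toy model (not IBM's device physics) that treats a racetrack as a shift register:

```python
# Toy model of Racetrack memory as a shift register (illustrative assumptions,
# not IBM's actual design): domains hold bits, a current pulse shifts them
# past a fixed read/write head.

class Racetrack:
    def __init__(self, bits):
        self.domains = list(bits)   # magnetic domain pattern along the wire
        self.head = 0               # fixed read/write head position

    def shift(self, n=1):
        """A current pulse moves every domain n positions (circular here for
        simplicity; a real track has reservoir segments at each end)."""
        n %= len(self.domains)
        self.domains = self.domains[-n:] + self.domains[:-n]

    def read(self):
        return self.domains[self.head]

    def write(self, bit):
        self.domains[self.head] = bit

track = Racetrack([0, 1, 1, 0])
track.shift()          # pulse current: the whole pattern moves one position
value = track.read()   # read whichever domain now sits under the head
```

Because access means shifting the pattern rather than moving a mechanical arm, this kind of structure is what would allow stored data to be reached in well under a nanosecond.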
First Integrated Graphene Circuit

· The first-ever CMOS-compatible graphene device could advance wireless communications and enable new high-frequency devices that operate under adverse temperature and radiation conditions, in areas such as security and medical applications.

· The graphene integrated circuit, a frequency multiplier, is operational up to 5 GHz and stable up to 200 degrees Celsius. While detailed thermal stability still needs to be evaluated, these results are promising for the use of graphene circuits in high-temperature environments.
· New architecture flips the current graphene transistor structure on its head. Instead of trying to deposit gate dielectric on an inert graphene surface, the researchers developed a novel embedded gate structure that enables high device yield on a 200mm wafer.
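A frequency multiplier of the kind described above relies on a nonlinear transfer characteristic. As a hedged numerical sketch (plain Python arithmetic, not a model of the graphene circuit itself), squaring a sinusoid at frequency f moves its AC energy to 2f, since sin²(2πft) = (1 − cos(4πft))/2:

```python
# Numeric check that a square-law nonlinearity doubles frequency
# (illustrative sketch, not a circuit simulation).
import math

f = 1.0                                   # input frequency (arbitrary units)
N = 1000
ts = [i / N for i in range(N)]            # one full period of the input
x = [math.sin(2 * math.pi * f * t) for t in ts]
y = [v * v for v in x]                    # nonlinear (square-law) element

def component(signal, freq):
    """Magnitude of the Fourier component of `signal` at `freq`."""
    re = sum(s * math.cos(2 * math.pi * freq * t) for s, t in zip(signal, ts))
    im = sum(s * math.sin(2 * math.pi * freq * t) for s, t in zip(signal, ts))
    return math.hypot(re, im) / len(signal)

# The squared output carries its AC energy at 2f, not at the input frequency f.
assert component(y, 2 * f) > 10 * component(y, f)
```

The real circuit adds gain and filtering around such a nonlinearity, but the doubling itself is just this trigonometric identity at work.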
Carbon Nanotube Transistors

· IBM researchers today demonstrated the first carbon nanotube transistors with sub-10 nm channel lengths, outperforming the best competing silicon-based devices at these length scales.
· While carbon nanotubes are already being considered for applications ranging from solar cells to displays, computers within the next decade are expected to use transistors with channel lengths below 10 nm, a length scale at which conventional silicon technology will have extreme difficulty performing even with new advanced device architectures. Scaled carbon nanotube devices with gate lengths below 10 nm are therefore a significant breakthrough for future computing applications.

· While carbon nanotubes are most often associated with improved switching speed (on-state performance), this work demonstrates for the first time that they can also provide excellent off-state behavior in extremely scaled devices, better than some theoretical estimates of tunneling current had suggested.
IBM and Nanotechnology Leadership
“Throughout its history, IBM’s continued investment in scientific research to identify new materials and processes has not only extended current technologies but is providing a sustainable technology foundation for tomorrow,” said T.C. Chen, vice president, Science and Technology, IBM Research. “Today's breakthroughs challenge the status quo by exploring the boundaries of science and transforming that knowledge into information technology systems that could advance the power and capability of businesses worldwide.”