December 07, 2007
Advancement in using light instead of wires for building supercomputers-on-a-chip
YORKTOWN HEIGHTS, N.Y., Dec. 6 -- Supercomputers that consist of thousands of individual processor "brains" connected by miles of copper wires could one day fit into a laptop PC, thanks in part to a breakthrough by IBM scientists announced today.
And while today's supercomputers can consume as much energy as hundreds of homes, these future tiny supercomputers-on-a-chip would expend about as much energy as a light bulb.
In a paper published in the journal Optics Express, the IBM researchers detailed a significant milestone in the quest to send information between multiple cores -- or "brains" -- on a chip using pulses of light through silicon, instead of electrical signals on wires. The breakthrough -- known in the industry as a silicon Mach-Zehnder electro-optic modulator -- performs the function of converting electrical signals into pulses of light. The IBM modulator is 100 to 1,000 times smaller than previously demonstrated modulators of its kind, paving the way for many such devices, and eventually complete optical routing networks, to be integrated onto a single chip. This could significantly reduce cost, energy and heat while increasing communications bandwidth between the cores more than a hundred times over wired chips.
"Work is underway within IBM and in the industry to pack many more computing cores on a single chip, but today's on-chip communications technology would overheat and be far too slow to handle that increase in workload," said Dr. T.C. Chen, vice president, Science and Technology, IBM Research. "What we have done is a significant step toward building a vastly smaller and more power-efficient way to connect those cores, in a way that nobody has done before."
Today, one of the most advanced chips in the world -- IBM's Cell processor, which powers the Sony PlayStation 3 -- contains nine cores on a single chip. The new technology aims to enable a power-efficient method to connect hundreds or thousands of cores together on a tiny chip by eliminating the wires required to connect them. Using light instead of wires to send information between the cores can be 100 times faster and use 10 times less power than wires.
"We believe this is a major advancement in the field of on-chip silicon nanophotonics," said Dr. Will Green, the lead IBM scientist on the project. "Just like fiber optic networks have enabled the rapid expansion of the Internet by enabling users to exchange huge amounts of data from anywhere in the world, IBM's technology is bringing similar capabilities to the computer chip."
IBM's optical modulator performs the function of converting a digital electrical signal carried on a wire into a series of light pulses carried on a silicon nanophotonic waveguide. First, an input laser beam is delivered to the optical modulator, which acts as a very fast "shutter" that controls whether the input laser is blocked or transmitted to the output waveguide. When a digital electrical pulse arrives at the modulator from a computer core, a short pulse of light is allowed to pass through at the optical output. In this way, the device "modulates" the intensity of the input laser beam, converting a stream of digital bits ("1"s and "0"s) from electrical signals into light pulses.
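The shutter behavior described above can be sketched numerically. In an idealized Mach-Zehnder modulator, the drive voltage sets the relative phase between the interferometer's two arms, and the recombined light is transmitted or extinguished depending on that phase. The sketch below is a textbook model, not IBM's device; the function names and the half-wave voltage `V_PI` are illustrative assumptions.

```python
import math

V_PI = 2.0  # assumed half-wave voltage: drive voltage giving a pi phase shift


def mzm_output_power(p_in_mw, drive_v):
    """Ideal Mach-Zehnder transfer function: the input laser is split
    between two arms, the drive voltage shifts their relative phase,
    and the arms interfere at the output combiner."""
    phase = math.pi * drive_v / V_PI
    return p_in_mw * math.cos(phase / 2) ** 2


def modulate(bits, p_in_mw=1.0):
    """Convert a digital bit stream into optical pulse powers: a '1'
    leaves the arms in phase (light passes), a '0' drives them pi out
    of phase (light is blocked)."""
    return [mzm_output_power(p_in_mw, 0.0 if b else V_PI) for b in bits]


# The bit pattern appears as high optical power for 1s, near zero for 0s.
print(modulate([1, 0, 1, 1, 0]))
```

In the real device this switching happens at 10 Gb/s, but the interference principle is the same: voltage controls phase, and phase controls how much of the input laser reaches the output waveguide.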
The report on this work, entitled "Ultra-compact, low RF power, 10 Gb/s silicon Mach-Zehnder modulator" by William M. J. Green, Michael J. Rooks, Lidija Sekaric, and Yurii A. Vlasov of IBM's T.J. Watson Research Center in Yorktown Heights, N.Y., is published in Volume 15 of the journal Optics Express. This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) through the Defense Sciences Office program "Slowing, Storing and Processing Light".
IBM's Chip Innovation Leadership
Today's announcement by IBM caps a decade of innovation from IBM Labs that has transformed the IT industry with new materials and design architectures for building smaller, more powerful and more energy-efficient chips.
IBM's pioneering work to move the industry from aluminum to copper wiring, unveiled in 1997, gave the industry an immediate 35 percent reduction in electron flow resistance and a 15 percent boost in chip performance.
Since then, IBM scientists have continued to drive performance improvements to sustain the pace of Moore's Law. In 2007 alone, IBM announced:
High-k metal gates (January 2007) -- A solution to one of the industry's most vexing problems: transistors that leak current. By using new materials, IBM will create chips with "high-k metal gates" that enable products delivering better performance while being both smaller and more power-efficient.
eDRAM (February 2007) -- By replacing SRAM with an innovative new type of speedy DRAM on a microprocessor chip, IBM will be able to more than triple the amount of embedded memory and boost performance significantly.
3-D Chip Stacking (April 2007) -- IBM announced the creation of three-dimensional chips using "through-silicon vias," allowing semiconductors to be stacked vertically instead of being placed near each other horizontally. This shortens critical circuit pathways by up to a factor of 1,000.
Airgap (May 2007) -- Using a "self-assembly" nanotechnology, IBM created vacuum gaps between the miles of wire inside a Power Architecture microprocessor, reducing unwanted capacitance and improving both performance and power efficiency.