July 16, 2008
A lot of industry people in the know are predicting that Moore's Law will come to an end sometime in the next decade. Starting with the current leading-edge 45nm process technology, chipmakers are looking to deliver three more process shrinks before silicon-based transistors run up against quantum mechanical effects. Most vendors have plans in place for 32nm and 22nm processors using UV lithography. The next stop is 16nm, but the general consensus is that it will have to be implemented with something other than conventional silicon CMOS -- perhaps SiGe or graphene. At 9 or 10 nanometers, quantum tunneling starts to become a real problem, so even more futuristic approaches, like molecular electronics or spintronics, will be required.
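To put those numbers in perspective, each full process generation has historically shrunk linear feature sizes by roughly a factor of 1/sqrt(2), or about 0.7x, which doubles transistor density. The short Python sketch below is illustrative only -- the scaling factor and starting point are assumptions, not figures from the article -- but it shows how three such shrinks from 45nm land close to the 32nm, 22nm and 16nm nodes mentioned above.

# Rough back-of-the-envelope: each process generation ideally shrinks
# linear dimensions by about 1/sqrt(2) (~0.7x), doubling transistor density.
def next_node(feature_nm, shrink=1 / 2 ** 0.5):
    """Ideal feature size after one full process shrink (assumed 0.7x)."""
    return feature_nm * shrink

node = 45.0  # current leading-edge node cited in the article
for generation in range(1, 4):
    node = next_node(node)
    print(f"after shrink {generation}: ~{node:.1f} nm")
# Prints roughly 31.8, 22.5 and 15.9 nm -- close to the 32/22/16nm roadmap.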
There's no guarantee that the development of these more advanced technologies will obey a Moore's Law timeline, which was based on the progression of two-dimensional semiconductors. So what's a chipmaker to do? Bernard Meyerson, IBM Fellow and chief technologist for the company's systems and technology group, thinks 3-D chip stacking will be the way to go. In a recent article in Semiconductor International, Meyerson argues that 2-D scaling will eventually break down for silicon technologies.
“Density will improve through 3-D stacking and the application of optical technology,” he said. “Some version of Moore’s Law will be followed. We didn’t foresee it would require a vertical perspective. There will be a tremendous focus on 3-D system architecture — logic, cache, memory, I/O subsystem — all optimized and integrated in a single stack.”
Meyerson believes the 3-D route is the path most chipmakers will pursue, rather than relying on the development of higher-risk nanoelectronics. And it may actually help processor architects simplify their designs. In 2-D, the microarchitecture has to integrate all the logic, cache and I/O on the same level. Adding an extra dimension gives the architects a lot more flexibility. In a Forbes interview last month, Meyerson talked about some of the possibilities:
There are still many tricks that we can play. When you start looking at the ability to put 10 or 20 chips in a stack, you can re-architect the entire system. The stack is your system. But you can re-architect the stack to be much more effective. Companies brag about the size of the cache. What if the cache was unlimited? What if you could put an entire plane of super high-density memory right above a plane of logic? What if you could put multiple cores on a single level and then reconfigure the wiring between that chip and the one above it?
Optical communication between the chips will be key, since it can deliver far more data than electrical signaling at much lower power. Linking stacks of chips with conventional electrical interconnects would subject the communication to resistive-capacitive (RC) delay and would probably not be practical. Going optical has the additional advantage of seamless communication across an optical backplane.
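To see why RC delay is the sticking point: both the resistance and the capacitance of an electrical wire grow linearly with its length, so the delay grows roughly with the square of the length. The sketch below makes that scaling concrete; the per-millimeter resistance and capacitance values are illustrative assumptions, not figures from the article.

# Illustrative only: per-mm resistance/capacitance are assumed values.
# For a distributed RC wire, R and C both scale with length, so the
# delay scales roughly with length squared -- the reason long electrical
# links between stacked chips become impractical, while optics do not.
def rc_delay_seconds(length_mm, r_per_mm=100.0, c_per_mm=0.2e-12):
    """Approximate distributed RC delay: 0.5 * R_total * C_total."""
    return 0.5 * (r_per_mm * length_mm) * (c_per_mm * length_mm)

for length_mm in (1, 5, 10, 20):
    print(f"{length_mm:>2} mm wire -> ~{rc_delay_seconds(length_mm) * 1e9:.2f} ns")
# Doubling the wire length roughly quadruples the delay.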
Of course 3-D chip stacking has not been perfected, nor have chip-level optical interconnects. But these are manufacturing problems, which should eventually yield to engineering. Nanoelectronic-based transistors, on the other hand, are still in the basic research stage and may remain there for the foreseeable future. In any case, computing will likely continue to shrink into smaller spaces, even after Moore's Law itself yields to the laws of physics.
Posted by Michael Feldman - July 15, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.