June 17, 2009
The era of the ever-shrinking transistor may be coming to an end. According to market research and consulting firm iSuppli, Moore's Law is going to run out of money before it runs out of technology. If true, this would be bad news indeed for the IT-industrial complex, since semiconductor components (CPUs, GPUs, memory devices, etc.) depend on Moore's Law for their roadmaps, and many businesses directly or indirectly count on the ensuing technological advances to drive revenue growth and worker productivity.
Moore's Law, of course, is the observation that the density of transistors on computer chips doubles approximately every two years. Intel co-founder Gordon Moore originally described the trend in a 1965 paper, at a time when transistor densities were actually doubling every year. More importantly, though, Moore observed that the cost per transistor decreased in concert with the shrinking geometries. And it is really this aspect of the model that is breaking.
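As a rough illustration of the density half of the law (my own sketch, not anything from Moore or iSuppli), the trend reduces to a simple doubling formula; the baseline density below is a hypothetical round number.

```python
# Minimal sketch of the Moore's Law density curve (illustrative only;
# the 1-billion-transistor baseline is a hypothetical round number).

def projected_density(density_now: float, years: float,
                      doubling_period: float = 2.0) -> float:
    """Transistor density after `years`, doubling every `doubling_period` years."""
    return density_now * 2 ** (years / doubling_period)

if __name__ == "__main__":
    baseline = 1e9  # assumed: a 1-billion-transistor chip today
    for years in (2, 4, 6, 8, 10):
        print(f"+{years:2d} years: {projected_density(baseline, years):.2e} transistors")
```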
In fact, it has been apparent for some time that the Moore's Law cost curve is running headlong into the economics of semiconductor manufacturing, where costs rise exponentially as process technology shrinks. Those costs are driven by R&D, testing, and the construction of new semiconductor fabrication facilities.
The price tag on a new 45nm fab is over a billion dollars today. AMD's new foundry partner, Globalfoundries, is constructing a 32nm fab in New York with a budget of $4.2 billion, and Intel has already committed $7 billion to upgrade its fabs to produce 32nm chips. You have to sell a lot of chips to recoup those kinds of costs. And those are just capital expenditures.
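To get a feel for how many chips it takes to recoup that kind of outlay, here is a back-of-envelope sketch. Only the $4.2 billion fab cost comes from the figures above; the fab lifetime, wafer volume, and die yield are hypothetical assumptions chosen for illustration.

```python
# Back-of-envelope fab amortization. Only the $4.2B fab cost is from the
# article; every other number is a hypothetical assumption.

fab_cost = 4.2e9           # capital cost of the fab, USD (from the article)
fab_lifetime_years = 5     # assumed useful life before the next node arrives
wafers_per_month = 30_000  # assumed production volume
good_dies_per_wafer = 400  # assumed yielded dies per 300mm wafer

total_dies = wafers_per_month * 12 * fab_lifetime_years * good_dies_per_wafer
capital_cost_per_die = fab_cost / total_dies

print(f"Dies over fab lifetime: {total_dies:.2e}")             # ~7.2e8 dies
print(f"Capital cost per die:   ${capital_cost_per_die:.2f}")  # ~$5.83
```

Under these assumptions, capital costs alone add several dollars to every die before a single R&D or operating dollar is counted, which is why volume matters so much.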
In the iSuppli announcement, Len Jelinek, the firm's director and chief analyst for semiconductor manufacturing, explained it this way:
"The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes. At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it."
The operative word is "never." The iSuppli study predicted that in 2014, when the 18nm and 20nm process nodes are introduced, there will be no economic incentive to build volume semiconductor components below those geometries.
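Jelinek's argument reduces to a breakeven test: does the value of a tool's lifetime output ever cover its purchase price? The sketch below makes that concrete; every number in it is a hypothetical assumption, not an iSuppli figure.

```python
# Hypothetical depreciation breakeven for a next-node fab tool.
# All numbers are assumptions for illustration, not iSuppli data.

tool_cost = 50e6           # assumed purchase price of the tool, USD
tool_lifetime_years = 5    # assumed depreciation window
wafers_per_year = 100_000  # assumed throughput attributable to the tool
margin_per_wafer = 80.0    # assumed gross margin per wafer credited to it

lifetime_value = tool_lifetime_years * wafers_per_year * margin_per_wafer
print(f"Lifetime value ${lifetime_value/1e6:.0f}M vs. tool cost ${tool_cost/1e6:.0f}M")
print("Pays back" if lifetime_value >= tool_cost else "Never pays back")
```

With these numbers the tool returns $40 million of value against a $50 million price, i.e., it never pays back; that is the scenario iSuppli projects for sub-20nm volume production.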
If true, this would tend to level the playing field for semiconductor vendors, and especially for fabless chip companies. For example, Intel would lose its current chip manufacturing advantage if everyone were stuck on the same process node. More importantly, if transistor size becomes a constant, much more of the burden of computing advancement will shift onto other elements of the ecosystem, mainly the folks who do design -- chip/device, board, system, and even software.
There would also be increased pressure to abandon legacy architectures in favor of more efficient designs that need proportionally less silicon to do comparable work. Products based on x86 processors and Ethernet networks have been able to advance partly thanks to the ever-shrinking semiconductor components upon which they are based. Without that crutch, more advanced processor designs and interconnects may come to the fore.
To a certain extent, this is already occurring in the high performance computing sector. Moore's Law is already too slow to keep up with the performance demands of HPC users, and the difference is being made up by aggregating more chips together and attaching accelerators like GPUs, Cell processors, and FPGAs. That's why interconnect technologies have become so important in HPC, which has largely abandoned Ethernet in favor of InfiniBand, and why x86 chips are playing a supporting role in some supercomputers, like the Roadrunner machine at Los Alamos National Lab and the TSUBAME super at Tokyo Tech. I imagine that if Moore's Law comes to a halt, or even slows down, non-legacy architectures will become more commonplace in HPC, and even in the broader computing ecosystem.
Of course, none of this may come to pass. Moore's Law is periodically declared dead and has thus far defied its doomsayers. Additional transistor density may be achieved in other ways, such as 3D semiconductor structures. And there's no shortage of more exotic approaches like carbon nanotubes, silicon nanowires, molecular crossbars, and spintronics. In any case, whatever happens in 2014, we're bound to be living in interesting times.
Posted by Michael Feldman - June 17, 2009 @ 4:16 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.