At a recent Q&A roundtable for journalists, Intel senior VP Pat Gelsinger laid out his vision of the future world of computing. Not surprisingly, Intel was at the center of that world. His four big predictions for the next several years extend today's trends: the continuation of Moore's Law (at least down to 10nm), more cores per processor, the overriding importance of software compatibility, and the ubiquity of Intel Architecture (IA) microprocessors. He thinks Intel is in a great position to exploit all of these trends.
First, since Intel leads the arms race in process technology, having been the first to reach and exploit 45nm on a large scale, the company is in the enviable position of setting the pace for its competition. Having the smallest transistors is a fundamental advantage for a chip vendor: it enables lower power consumption and faster processor clocks, and it allows manufacturing to stamp out more processors from a given patch of silicon (a standard wafer is currently 300mm in diameter). Gelsinger also correctly notes that over the next decade, the multi-billion dollar expense of building new fabs to produce 450mm wafers will thin the ranks of fab companies substantially. He expects the number of such companies to shrink from tens to single digits. Intel will certainly be among them.
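The economics of wafer size can be made concrete with a back-of-the-envelope calculation. The sketch below uses a common dies-per-wafer approximation (wafer area over die area, minus an edge-loss term); the 100 mm² die area is a hypothetical figure chosen for illustration, not one cited at the roundtable.

```c
#include <math.h>
#include <stdio.h>

static const double PI = 3.14159265358979323846;

/* Rough dies-per-wafer approximation:
 *   dies ~ (pi * (d/2)^2) / A  -  (pi * d) / sqrt(2 * A)
 * where d is the wafer diameter (mm) and A is the die area (mm^2).
 * The second term accounts for partial dies lost around the edge. */
static double dies_per_wafer(double wafer_mm, double die_mm2)
{
    double r = wafer_mm / 2.0;
    return (PI * r * r) / die_mm2 - (PI * wafer_mm) / sqrt(2.0 * die_mm2);
}

int main(void)
{
    double die = 100.0;  /* hypothetical 100 mm^2 die */
    printf("300mm wafer: ~%.0f dies\n", dies_per_wafer(300.0, die));
    printf("450mm wafer: ~%.0f dies\n", dies_per_wafer(450.0, die));
    /* The 450mm wafer yields roughly 2.3x the dies per wafer, which
     * is why the transition is attractive despite the fab cost. */
    return 0;
}
```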
The company thinks it’s now in a position to expand its IA business into new markets. Undoubtedly, part of the reason it wants to do this is that growth in the desktop and even laptop markets is slowing, while mobile computing and high performance technical computing are taking off. The company is attempting an aggressive move into the mobile and embedded space with its Atom processor, hoping to apply some of its low-power know-how to smaller platforms. At the same time, Intel is looking to attack the high end (HPC and graphics) with its upcoming Larrabee processor.
Neither market is a sure bet for Intel, since the company lacks experience in both the mobile and high-end graphics spaces, although it’s certainly well-established in HPC. In any case, Gelsinger is betting that a standard IA platform with reams of software behind it will trump any marginal efficiencies offered by competing platforms. This may not be as true for the mobile/embedded space as Intel would like to believe. There, both the hardware and the software must be lightweight to survive, and Linux on RISC hardware is already well-established.
Larrabee is another story, since its future is inextricably linked to the success of GPGPU (and processor acceleration in general). Apparently, when the subject of GPGPU comes up, Gelsinger can’t help but trash-talk the competition a bit. In a Custom PC article on Monday, author Ben Hardwidge quotes Gelsinger as saying that GPGPU languages like NVIDIA’s CUDA will one day be nothing more than “interesting footnotes in the history of computing annals.” He takes a shot at the IBM Cell processor as well, noting: “It promised to be this radical new computing architecture, and basically years later the application programmers have barely been able to comprehend how to write applications for it.”
Not so for his manycore Larrabee, says Gelsinger. That architecture promises IA compatibility, along with new vector/graphics instruction set extensions. The idea is that by merging the essential capabilities of the GPU with the CPU, developers can do high-end visualization or technical computing on-chip, within a standard programming environment. And since the graphics/vector processor is integrated, the external bus bottleneck to a discrete GPU is eliminated. Gelsinger says this approach, not GPGPU, is the long-term solution for delivering high performance data parallelism to the masses.
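To illustrate what a “standard programming environment” buys you, here is a minimal data-parallel sketch in plain C with an OpenMP pragma. This is not Larrabee code, and the vector extensions Gelsinger alludes to are not shown; the point is only that an IA manycore could, in principle, run an ordinary parallelized loop like this one, with the tool chain spreading work across cores and vector lanes, rather than requiring a separate GPGPU language and explicit host/device memory transfers.

```c
#include <stdio.h>

#define N 1000000

/* SAXPY: y = a*x + y, a classic data-parallel kernel. With OpenMP,
 * the iterations are divided among cores; a vectorizing compiler can
 * additionally map each core's chunk onto wide vector units. */
void saxpy(float a, const float *x, float *y, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }
    saxpy(2.0f, x, y, N);
    printf("y[10] = %f\n", y[10]);  /* expect 21.0 */
    return 0;
}
```

The same source compiles and runs unchanged on any IA machine, single-core or manycore, which is precisely the software-compatibility lever Gelsinger is counting on.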
Of course, the underlying assumption is not only that the IA software ecosystem will continue to dominate the industry, but that it can also be used as a lever to lift Intel into new markets. Not a bad strategy, but certainly not a foolproof one. If we’re destined to have x86-based nanocomputers coursing through our bloodstreams someday, so be it. Maybe the era of x86 will last another 30 years. But it’s hard to believe that the future of computing has already been invented.