In San Francisco this week, Intel execs evangelized the company's vision of the future of computing. CEO Paul Otellini used the Intel Developer Forum (IDF) as a platform to present the overall product roadmap for the next four years and beyond. CTO Justin Rattner talked about their long-range terascale processor development and the new types of applications that will be using this advanced technology.
In the near term, Intel is planning to accelerate its micro-architecture design cycle by producing new core architectures every two years, instead of every four to six years as in the past. Otellini displayed a chart that mapped a new micro-architecture coming in 2008 (code-named Nehalem and targeted at 45nm silicon manufacturing technology), followed by another in 2010 (code-named Gesher and targeted at 32nm). He said Intel is on track to use its 45nm technology in new products starting in the second half of 2007.
Intel, which has been shipping 65nm processors since June, has perhaps a six-month lead over rival AMD in silicon manufacturing technology. The first Opteron and Athlon processors on 65nm technology are not expected until the end of this year. Intel's process technology advantage may be further extended when it jumps to 45nm in 2007. But AMD, with its HyperTransport technology, is forcing its larger rival to play catch-up in the processor interconnect arena. At the IDF, Intel said very little about the roadmap for its CSI technology. CSI, which stands for Common System Interconnect or Common System Interface depending on who you talk to, is allegedly Intel's answer to HyperTransport, but apparently was not worth talking about yet.
In the meantime, Intel is promoting an open-standard interconnect technology called “Geneseo,” which is characterized as an extension to the popular PCI Express. Like AMD's HyperTransport-based Torrenza initiative, Geneseo is designed to allow other vendors to attach special-purpose acceleration processors (e.g., numerical co-processor, XML engines and encryption/decryption devices) to the host processor. Intel is working with a number of partners on this technology, including IBM, but it's unclear when Geneseo will see commercial application.
The most forward-looking presentation at the IDF came from Intel CTO Justin Rattner. He revealed some of the details of the work being done by the company's Tera-scale Computing Research Program. In a departure from commercial designs, Rattner described a prototype that contained 80 RISC-like processing cores arranged in a tiled fashion and bonded to a vertical stack of memory chips. According to Rattner, this type of three-dimensional configuration allows for thousands of interconnects, which can sustain memory-to-processor transfer rates of terabytes per second. Intel's recently demonstrated hybrid silicon laser would be employed for terabit-per-second connectivity to other processors, I/O devices and even other systems. The whole idea is to produce a terascale processor, a device that will deliver a teraflop of performance and have access to terabytes per second of bandwidth — literally a supercomputer on a chip. The technology could be commercialized within the next five years.
Such a chip is not destined for traditional supercomputing. Intel does not see nuclear weapons simulations or global climate modeling as a growth industry. A terascale processor would presumably find a comfortable home in large-scale data centers, where multi-threaded Web service applications are all the rage. This technology could also propel new application markets, and it is here that Intel sees the path to high-volume production. Not content with a build-it-and-they-will-come approach, Intel has already broadly defined the classes of applications that would inhabit such processors. They are called RMS: Recognition, Mining and Synthesis. Intel defines them as follows:
Recognition: Machine-learning capabilities that allow computers to examine data (text, images, video, audio, etc.) and construct mathematical models based on what they identify. An example would be constructing a model of the face of a specific person.
Mining: The capability to sift through large amounts of real-world data related to the patterns or models of interest. Put more simply, it is the ability to find an instance of a specific model amidst a large volume of data. For example, mining could entail finding a particular person's face from a large number of images of various resolutions, lighting environments, and so on.
Synthesis: The capability to explore theoretical scenarios by constructing new instances of a model. For example, this could be projecting what a person's face might look like if they were younger or older.
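To make the RMS triad a little more concrete, here is a minimal sketch in Python. It is purely illustrative — the "model" is just a mean feature vector, and none of the function names or data representations come from Intel — but it shows the three capabilities in miniature: recognition builds a model from examples, mining finds the closest instance of that model in a larger dataset, and synthesis generates a new plausible instance from the model.

```python
import random

def recognize(examples):
    """Recognition: build a model (here, a mean feature vector) from example data."""
    n = len(examples[0])
    return [sum(e[i] for e in examples) / len(examples) for i in range(n)]

def mine(model, dataset):
    """Mining: find the dataset item closest to the model (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dataset, key=lambda item: dist(item, model))

def synthesize(model, scale=0.1):
    """Synthesis: generate a new instance by perturbing the model."""
    return [x + random.uniform(-scale, scale) for x in model]

# Toy "face" feature vectors (two features each)
examples = [[1.0, 2.0], [1.2, 1.8]]
model = recognize(examples)                          # mean vector [1.1, 1.9]
dataset = [[5.0, 5.0], [1.15, 1.85], [0.0, 9.0]]
best = mine(model, dataset)                          # nearest match: [1.15, 1.85]
new_instance = synthesize(model)                     # a plausible new variant
```

The real versions of these capabilities — face models, image search, age projection — are of course vastly more compute-intensive, which is exactly why Intel ties them to terascale hardware.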
One example of an application that incorporates these capabilities would be a “smart” car that could drive itself to a destination (like picking you up and taking you home). Another example might be a Web service that allowed you to do virtual clothes shopping across the Internet, enabling you to “try on” individual items and see how they looked on you along with other items in your current wardrobe.
Imagine the economic effects that would result from these two rather simple examples. The smart car would eliminate cabs, driving schools, traffic officers and most of the DMV, as well as revolutionize commercial ground transport. The second example would accelerate the demise of brick and mortar clothing stores, change the profile of shopping malls and could lead to designer clothing for the middle class.
Justin Rattner wrote about the emergence of these new application domains in his recent blog entry:
“Such emerging 'killer apps' of the future have a few important attributes in common – they are highly parallel in nature, they are built from a common set of algorithms, and they have, by today's standards, extreme computational and memory bandwidth requirements, often requiring teraFLOPS of computing power and terabytes per second of memory bandwidth, respectively. Unfortunately the R&D community is lacking a suite of these emerging, highly-scalable workloads in order to guide the quantitative design of our future computing systems.”
Because of this deficiency, Intel has taken it upon itself to help build a new software culture focused on parallel programming. A company white paper, “From a Few Cores to Many: A Tera-scale Computing Research Overview,” reflects the company's mission to convert the programming masses to HPC. It states:
“In the tera-scale future, software should be designed to use available parallelism to gain the performance benefit of the increased numbers of cores. This requires that software developers design parallel programs, a traditionally time-consuming and error-prone task which requires developers to think differently than the way they do today. Teaching mainstream and future developers to identify and then effectively exploit parallelism is something Intel must foster if these skills are to move from a narrow domain of high-performance computing (HPC) experts into the mainstream.”
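The kind of thinking the white paper asks of mainstream developers can be illustrated with even a trivial example. The sketch below (my own, not Intel's) takes an embarrassingly parallel workload — independent units of work with no shared state — and decomposes it across cores using Python's standard multiprocessing module. This data-decomposition pattern is second nature to HPC programmers but is exactly the skill Intel wants to push into the mainstream.

```python
from multiprocessing import Pool

def simulate(seed):
    """Stand-in for an expensive, independent unit of work."""
    x = seed
    for _ in range(1000):
        x = (x * 1103515245 + 12345) % (2 ** 31)  # simple LCG iteration
    return x % 100

if __name__ == "__main__":
    seeds = range(8)
    # Serial version: one core does all the work.
    serial = [simulate(s) for s in seeds]
    # Parallel version: the same work, decomposed across available cores.
    with Pool() as pool:
        parallel = pool.map(simulate, seeds)
    # Same answers, computed concurrently -- the easy case. The hard cases,
    # with shared state and dependencies, are where the new skills are needed.
    assert serial == parallel
```

The catch, as the white paper implies, is that most interesting applications are not embarrassingly parallel; identifying exploitable parallelism in tangled, stateful code is the time-consuming and error-prone part.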
If the future of computing is high performance computing, then the vendors that figure out a way to drive this technology into the mainstream will dominate IT and relegate their rivals to niche markets or worse. Killer applications indeed.
To read Justin Rattner's blog, visit http://blogs.zdnet.com/OverTheHorizon/.
For more information on Intel's Tera-Scale research program, download the white paper at ftp://download.intel.com/research/platform/terascale/terascale_overview_paper.pdf.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].