Because high performance computing lives on the leading edge of information technology, predicting the path of HPC is like forecasting the future of the future. When Cray Research and CDC began selling supercomputers with custom processors in the early '70s, it probably seemed inconceivable that in three decades most high performance computing would be done on the descendants of PC chips. Only using the rear-view mirror of the present can we see that it was all inevitable. The economics of volume chip production, the introduction of cluster and grid computing, the momentum of a rapidly growing software base, and Moore's Law all conspired to propel the x86 into HPC preeminence. Everything else was just noise.
It's easy to identify the visible new trends today. In fact, they're generally the same in HPC as they are in the overall industry: the rise of multi-core and heterogeneous processing, the importance of power consumption, the industry embrace of open source software, virtualization in all its forms, and the struggle for application parallelization. But which of these, if any, is just noise? And how will all these elements interact?
Predicting winning technology formulas is not just an exercise for the armchair geek. It's the intellectual focus of most IT organizations and informs their most basic business decisions. And while most companies end up just following trends to stay afloat, some actually set them for the rest of the industry. Intel and AMD fall into the latter category.
Even though the two x86 chipmakers are going after the same markets, their underlying technology strategies are diverging. Intel uses its in-house semiconductor and CPU design expertise to lead in x86 performance and power efficiency. Its aggressive two-year cadence of process shrinks and core redesigns is meant to keep it ahead of its rivals on fundamental microprocessor technology. Meanwhile, AMD emphasizes system design to achieve scalability and overall system throughput. The company is also trying to establish an AMD-based ecosystem, using Torrenza and HyperTransport to foster open standards for third-party silicon.
While these two chip titans are busy inventing the future, they are also affected by trends they can't control. Late last year, AMD made the biggest strategic decision of its life when it acquired ATI. It saw the future of general-purpose processing as something more than the x86. The company's CPU-GPU Fusion initiative and its ongoing development of discrete GPUs are AMD's way of bringing heterogeneous processing in-house. Rumors abound that Intel is working on adding high-end GPUs to its offerings as well. Publicly the chipmaker has been mum on the subject, but the Intel web page that lists job openings for graphics engineers (http://www.intel.com/jobs/careers/visualcomputing/) provides a pretty good indication of the company's intent.
In this week's issue, Intel and AMD offer an outline of their high performance computing strategies — at least the public ones. Stephen Wheat, senior director of Intel's HPC Business Unit, talks about x86 high performance computing and how the company's overall strategy fits into that market. Phil Hester, AMD CTO, and Bob Drebin, CTO of AMD's Graphics Products Group, answer questions about how their company's technology roadmap targets future HPC workloads.
What may be most similar about the two companies is their measured devotion to high performance computing. Both organizations have internal HPC units, but these entities have only limited effect on driving overall company strategy. That makes good business sense. The x86 market is nearing $30 billion annually (Mercury Research, 2006), and HPC represents just a fraction of it; the entire HPC market is around $10 billion, according to IDC. While high performance computing is important to both companies, it's treated as a leverage point for the larger business rather than as an end in itself.
“[W]e rarely look at the HPC segment in isolation,” said Intel's Stephen Wheat. “HPC innovation quickly migrates into the enterprise segment. There are many opportunities for HPC to influence offerings in the larger markets.”
The realities of commodity-based HPC are intimately tied to the mega-trend of multi-core processors. This architectural shift means that parallel processing is not just for HPC anymore. All the chipmakers, not just Intel and AMD, are counting on this. In fact, multi-core processing is going to blur the distinction between general purpose and high performance computing. It may be the most profound development in computer hardware since the integrated circuit.
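To see what that shift asks of everyday developers, consider a minimal sketch (in Python, chosen purely for illustration, with a made-up per-record workload) of the difference between a serial computation and the same work spread across cores. None of this comes from Intel or AMD; it's simply the general shape of the change that multi-core imposes on application code.

```python
from multiprocessing import Pool

def score(record):
    # Stand-in for some per-record computation (hypothetical workload)
    return sum(ord(c) for c in record)

def score_serial(records):
    # The traditional version: one core does all the work
    return [score(r) for r in records]

def score_parallel(records, workers=4):
    # The multi-core version: the same work divided across worker processes
    with Pool(processes=workers) as pool:
        return pool.map(score, records)

if __name__ == "__main__":
    data = ["cray", "cdc", "x86", "gpu"]
    print(score_serial(data))
    print(score_parallel(data))
```

The point isn't the particular library; it's that the second version, trivial as it looks here, is where the struggle for application parallelization begins once the data, the dependencies and the synchronization get realistic.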
The February edition of CTWatch Quarterly (http://www.ctwatch.org/quarterly/) is devoted entirely to the multi-core revolution. It traces the rationale behind the revolution, describes its impact, and outlines the problems this new architecture has created for computing in the 21st century. The four articles in the issue are: The Impact of Multicore on Computational Science Software, The Many-Core Inflection Point for Mass Market Computer Systems, The Role of Multicore Processors in the Evolution of General-Purpose Computing, and High Performance Computing and the Implications of Multi-core Architectures. All are worth reading if you want to understand the paradigm that is shifting beneath your feet.
Pushback on Programming
Apparently my commentary a couple of weeks back, HPC Programming for the Masses, struck a nerve. Professor Marc Snir, head of the computer science department at the University of Illinois at Urbana-Champaign, took exception to my perspective on the relative importance of different programming language models for HPC. The view I put forth was that HPC-enabled versions of domain-specific languages such as MATLAB, Excel and SQL will be more important than traditional third-generation languages in spreading the commercial use of HPC, since they will broaden the developer base beyond computer scientists.
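As a rough illustration of what I mean (a sketch in Python, with the NumPy array library standing in for a MATLAB-style domain language; the example itself is invented, not drawn from either camp), the domain-specific style lets an analyst state a whole-array calculation in a single expression and leave the parallelization to whoever builds the library, while the third-generation style leaves the loops, and any parallelism, in the analyst's hands.

```python
import numpy as np

# Domain-specific (array language) style: one whole-array expression.
# Whether and how it runs in parallel is the library implementer's concern.
def period_returns_array_style(prices):
    prices = np.asarray(prices, dtype=float)
    return np.diff(prices, axis=0) / prices[:-1]

# Third-generation style: the same calculation spelled out element by element.
# Any parallelism here would have to be written, and debugged, by hand.
def period_returns_loop_style(prices):
    rows, cols = len(prices) - 1, len(prices[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            out[i][j] = (prices[i + 1][j] - prices[i][j]) / prices[i][j]
    return out
```

The first version is the one a quant, an engineer or a scientist will actually write; making it run fast on parallel hardware is exactly the kind of work that stays with the programming elite.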
Snir's point of view is that we should leave programming to the professionals — i.e., software engineers. To be honest, he's in good company. Bjarne Stroustrup, the inventor of C++, expressed the same sentiments in a recent interview for Technology Review. However, Snir also implies that I believe higher level languages will make software engineers redundant. Actually, I never suggested that and certainly don't believe it. As I pointed out in my commentary, most domain-specific and fourth-generation languages are built on third-generation technology developed by the programming elite.
Snir does make some interesting observations about the PGAS languages and the HPCS effort. In the process, the professor also gives us a treatise on an implementation language for HPC. This alone is worth a read.
Oddly enough, Snir circles back to acknowledge that application-specific languages do represent an important paradigm for HPC.
“High-level languages should match the application domain, not the architecture of the compute platform,” he says. “Developing high-level languages that satisfy the needs of HPC but are less convenient to use on more modest platforms is a waste of money.”
At that point, I'm not sure which side he's really arguing for. Read the article and decide for yourself.
—–
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].