February 23, 2007
Because high performance computing lives on the leading edge of information technology, predicting the path of HPC is like forecasting the future of the future. When Cray Research and CDC began selling supercomputers with custom processors in the early '70s, it probably seemed inconceivable that in three decades most high performance computing would be done on the descendants of PC chips. Only using the rear-view mirror of the present can we see that it was all inevitable. The economics of volume chip production, the introduction of cluster and grid computing, the momentum of a rapidly growing software base, and Moore's Law all conspired to propel the x86 into HPC preeminence. Everything else was just noise.
It's easy to identify the visible new trends today. In fact, they're generally the same in HPC as they are in the overall industry: the rise of multi-core and heterogeneous processing, the importance of power consumption, the industry embrace of open source software, virtualization -- in all its forms, and the struggle for application parallelization. But which of these, if any, is just noise? And how will all these elements interact?
Predicting winning technology formulas is not just an exercise for the armchair geek. It's the intellectual focus of most IT organizations and informs their most basic business decisions. And while most companies end up just following trends to stay afloat, some actually set them for the rest of the industry. Intel and AMD fall into the latter category.
Even though the two x86 chipmakers are going after the same markets, their underlying technology strategies are diverging. Intel uses its in-house semiconductor and CPU design expertise to lead in x86 performance and power efficiency; its aggressive two-year cadence of process shrinks and core redesigns is intended to keep it ahead of its rivals on fundamental microprocessor technology. Meanwhile, AMD emphasizes system design to achieve scalability and overall system throughput. The company is also trying to establish an AMD-based ecosystem, using Torrenza and HyperTransport to foster open standards for third-party silicon.
While these two chip titans are busy inventing the future, they are also affected by trends they can't control. Late last year, AMD made the biggest strategic decision of its life when it acquired ATI. It saw the future of general-purpose processing as something more than the x86. The company's CPU-GPU Fusion initiative and the ongoing development of discrete GPUs is AMD's way to bring heterogeneous processing in-house. Rumors abound that Intel is working on adding high-end GPUs to its offerings as well. Publicly the chipmaker has been mum on the subject, but the Intel web page that lists job openings for graphics engineers (http://www.intel.com/jobs/careers/visualcomputing/) provides a pretty good indication of the company's intent.
In this week's issue, Intel and AMD offer outlines of their high performance computing strategies -- at least the public ones. Stephen Wheat, senior director of Intel's HPC Business Unit, talks about x86 high performance computing and how the company's overall strategy fits into that market. Phil Hester, AMD CTO, and Bob Drebin, CTO of AMD's Graphics Products Group, answer questions about how their company's technology roadmap targets future HPC workloads.
What may be most similar about the two companies is their measured devotion to high performance computing. Both have internal HPC units, but these entities have only limited effect on overall company strategy. That makes good business sense. With the x86 market nearing $30 billion annually (Mercury Research, 2006), the HPC slice represents just a fraction of the business; the entire HPC market is around $10 billion, according to IDC. So while high performance computing is important to both companies, it's treated as a leverage point for the larger business rather than as an end in itself.
"[W]e rarely look at the HPC segment in isolation," said Intel's Stephen Wheat. "HPC innovation quickly migrates into the enterprise segment. There are many opportunities for HPC to influence offerings in the larger markets."
The realities of commodity-based HPC are intimately tied to the mega-trend of multi-core processors. This architectural shift means that parallel processing is not just for HPC anymore. All the chipmakers, not just Intel and AMD, are counting on this. In fact, multi-core processing is going to blur the distinction between general purpose and high performance computing. It may be the most profound development in computer hardware since the integrated circuit.
The February edition of CTWatch Quarterly (http://www.ctwatch.org/quarterly/) has devoted the entire issue to the multi-core revolution. It traces the rationale behind the revolution, describes its impact, and outlines the problems this new architecture has created for computing in the 21st century. The four articles in the issue include: The Impact of Multicore on Computational Science Software, The Many-Core Inflection Point for Mass Market Computer Systems, The Role of Multicore Processors in the Evolution of General-Purpose Computing, and High Performance Computing and the Implications of Multi-core Architectures. All are worth reading if you want to understand the paradigm that is shifting beneath your feet.
Pushback on Programming
Apparently my commentary a couple of weeks back, HPC Programming for the Masses, struck a nerve. Professor Marc Snir, head of the computer science department at the University of Illinois at Urbana-Champaign, took exception to my perspective on the relative importance of different programming language models for HPC. The view I put forth was that HPC-enabled versions of domain-specific languages such as MATLAB, Excel and SQL will be more important than traditional third-generation languages in spreading the commercial use of HPC, since they will broaden the developer base beyond computer scientists.
Snir's point of view is that we should leave programming to the professionals -- i.e., software engineers. To be honest, he's in good company. Bjarne Stroustrup, the inventor of C++, expressed the same sentiments in a recent interview for Technology Review. However, Snir also implies that I believe higher-level languages will make software engineers redundant. Actually, I never suggested that and certainly don't believe it. As I pointed out in my commentary, most domain-specific and fourth-generation languages are built on third-generation technology developed by the programming elite.
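The distinction at issue can be reduced to a toy example (mine, not Snir's): in a third-generation language the loop structure is explicit, so exploiting parallel hardware is the programmer's problem; in a declarative, domain-specific style the author states only what to compute, leaving an HPC-aware runtime free to decide how.

```python
# 3GL style: the iteration is spelled out, so any parallelization,
# vectorization, or distribution is up to the programmer.
def dot_3gl(xs, ys):
    total = 0.0
    for i in range(len(xs)):
        total += xs[i] * ys[i]
    return total


# DSL style: one declarative expression, analogous to MATLAB's x' * y
# or SQL's SUM(x * y). An HPC-enabled implementation of such a language
# could map this onto parallel hardware without the author's involvement.
def dot_dsl(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))


assert dot_3gl([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]) == 32.0
assert dot_dsl([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]) == 32.0
```

Both forms compute the same dot product; the argument is over which one a chemist or financial analyst should have to write.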
Snir does make some interesting observations about the PGAS languages and the HPCS effort. In the process, the professor also gives us a treatise on an implementation language for HPC. This alone is worth a read.
Oddly enough, Snir circles back around to recognize that application specific languages do represent an important paradigm for HPC.
"High-level languages should match the application domain, not the architecture of the compute platform," he says. "Developing high-level languages that satisfy the needs of HPC but are less convenient to use on more modest platforms is a waste of money."
At that point, I'm not sure which side he's really arguing for. Read the article and decide for yourself.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - February 22, 2007 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.