August 11, 2006
With the next-generation AMD Rev F Opteron processors about to hit the streets next week, it might be a good time to take stock of the Opteron-Xeon competition. The new Intel dual-core Woodcrest chips -- officially, the Xeon 5100 Series processors -- are now being supported by most major and minor system OEM players. So what's been the overall impact? To be honest, it's too early to tell. The Woodcrest chips have only been available since June, although the OEMs were prepared for them months in advance of the official launch.
It is probably significant that none of the Tier 1 OEMs has really changed its x86 strategy much since the beginning of the year. There's a certain amount of momentum built into server development. For example, Sun Microsystems has recently expanded its Opteron lineup, but it already had a substantial commitment to the AMD roadmap. IBM also announced new Opteron-based high performance systems just last week, but Big Blue has played on both sides of the x86 street for a few years now. Likewise for HP. Dell recently added the Opteron to its four-socket server line, but the company is still essentially an Intel shop. So no Tier 1 OEM concluded that the new Intel chip was an Opteron-killer, and none of those already invested in Xeon technology has forsaken the Intel x86 roadmap. That's to be expected. The big server makers tend to be a conservative bunch, waiting for the other guy to make a mistake.
But this reality hides the fact that the relationship between the two processors has changed. The new Woodcrest processor is vastly better than the previous generation Xeons and there are many indications that it has better performance on a wide variety of applications than the current generation of Opterons. Woodcrest, like all of Intel's new Core architecture microprocessors, uses a 65 nm process; AMD is not expected to move the Opterons to 65 nm until next year. Intel has made other low-level architectural improvements to increase performance and reduce power consumption. So overall, Intel has done an excellent job of addressing the performance and energy-efficiency gap with its latest Core architecture.
What Intel hasn't addressed is SMP scalability. Although both vendors are offering dual-core x86 chips today -- and will soon have quad-core versions -- the other dimension of computational scalability is the ability to increase the number of processors on a board. AMD's HyperTransport (HT) bus technology allows Opterons to inhabit four-socket and eight-socket systems with relative ease. In the near future, 32-socket Opteron boxes will be possible.
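The scaling difference is easy to see with a back-of-the-envelope calculation. On a shared front-side bus, every socket divides one fixed pool of bandwidth; with point-to-point HT links, each socket brings its own links to the party. The figures below are illustrative assumptions, not vendor specifications:

```python
# Rough comparison of per-socket interconnect bandwidth as socket count
# grows. FSB_GBS and HT_LINK_GBS are assumed, ballpark-2006 numbers,
# not published specifications.

FSB_GBS = 10.6          # assumed total shared front-side bus bandwidth (GB/s)
HT_LINK_GBS = 8.0       # assumed bandwidth per HyperTransport link (GB/s)
HT_LINKS_PER_SOCKET = 3 # Opteron-style socket with three HT links

def per_socket_bandwidth(sockets: int) -> dict:
    """Bandwidth available to each processor under the two designs."""
    shared = FSB_GBS / sockets                          # one bus split N ways
    point_to_point = HT_LINK_GBS * HT_LINKS_PER_SOCKET  # links come with the socket
    return {"fsb": shared, "ht": point_to_point}

for n in (2, 4, 8):
    bw = per_socket_bandwidth(n)
    print(f"{n} sockets: FSB {bw['fsb']:.1f} GB/s each, HT {bw['ht']:.1f} GB/s each")
```

Whatever the exact figures, the shape of the curve is the point: the shared-bus share shrinks as sockets are added, while the point-to-point share stays flat.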
The scalability of Opteron-based systems is one of the main reasons why companies such as Cray have made long-term commitments to the AMD roadmap. Cray will use Opteron chips in its supercomputer systems until at least 2010. Both Sun Microsystems and Fabric7 Systems are delivering eight-socket Opteron systems to compete with high performance, RISC processor-based machines (see the feature article in this issue describing the Fabric7 solution). Both of these latter companies seem intent on using AMD to go after the high-end server market.
One of the key pieces of technology in all this is AMD's coherent HyperTransport, which, unlike standard HT, allows processors to be connected to one another while maintaining cache coherency. Intel's legacy front-side bus (FSB) technology, on the other hand, limits the number of processors you can comfortably accommodate on a board. There are two-socket and four-socket Xeons today, but the speed of the FSB limits interprocessor communication performance. In the world of high performance computers and enterprise servers, processor scalability is king. In fact, as soon as the application software catches up with multi-threading, it is likely to become quite important in workstations and PCs as well.
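"Catching up with multi-threading" means restructuring applications so the work decomposes across cores and sockets at all. A minimal sketch of that decomposition, using only the Python standard library; the squared-sum kernel and the worker count are arbitrary stand-ins for real work:

```python
# Toy example of the data decomposition an application needs before extra
# cores and sockets pay off: split a range into chunks, farm the chunks
# out to a pool of worker processes, and combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Sum of squares over a half-open range [lo, hi).
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # One chunk per worker; the last chunk absorbs any remainder
    # when n is not evenly divisible by the worker count.
    step = max(1, n // workers)
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(100_000))
```

Serial code gains nothing from a second socket; only once the work is expressed this way does additional hardware parallelism translate into speedup, which is why interconnect scalability and software threading have to arrive together.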
HyperTransport also provides a more flexible communication fabric across a board or even a chassis (in HyperTransport 3.0, chassis-to-chassis). And coherent HT allows system designers to connect all sorts of processors and I/O devices into the fabric. So while the Opteron's dominance in 64-bit processing has been essentially eliminated by the new Intel architecture, the advantages of HyperTransport over FSB remain.
Intel is thought to be developing its own advanced bus technology called CSI, which may or may not stand for Coherent Scalable Interconnect. Whatever it will be labeled, the rumor is that Intel will be offering a next-generation, processor-to-processor interconnect sometime in 2008. The only problem is that this technology appears to be targeted for the future quad-core Itanium processor (Tukwila), not any x86 Xeon products. It's possible the new bus will eventually migrate to the x86 line after it gets established on Itanium. But if Intel fails to provide an advanced processor bus for the Xeon chips, it is hard to envision how Intel's 64-bit x86 offerings will be able to follow AMD into higher end x86 SMP machines.
Perhaps Intel's strategy is just that -- to position only the Itanium for the high-end of the SMP market, leaving Xeon for all lesser tasks (a huge market, by the way). If that's the case, at some point the x86 lines from the two companies will no longer compete directly with each other, at least in much of HPC and enterprise computing. But even this scenario leaves me wondering. Xeon is Intel's high-end x86 processor, so the chipmaker will have to upgrade Xeon's processor bus technology regardless of where it falls in the continuum of the enterprise and HPC markets. Also, I'm guessing Intel is still not convinced that the Itanium will be the 64-bit architecture that carries the day for HPC and mission-critical enterprise computing. The bottom line: Intel needs to ensure that its next generation processor bus is hosted on its most successful chip.
But Intel's fundamental problem is developing the bus technology that can compete against HyperTransport. Not only does AMD have a three-year headstart on Intel, but HyperTransport presents a moving target as it continues to extend its performance and capabilities. This is not to suggest that Intel can't recover. The company has enormous resources and employs highly skilled engineers to keep its chips on the cutting edge. Their latest revamping of the Intel x86 architecture proves they have the potential to leapfrog AMD. Whether they can do so once again remains to be seen. Until then, Intel will have to rely on the advantage it seems to have established in x86 processor performance to create some breathing room.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - August 10, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.