September 07, 2007
Will AMD find true happiness in Barcelona? Will Xeon break Opteron's heart? What evil lurks beneath the Front Side Bus? If this sounds like the premise for some weird, high-tech soap opera, that's because it is. The Intel-AMD feud has been going on for over 20 years and the participants show no signs of reconciliation. Since 1987, the two chipmakers have tangled with each other in litigation, verbally attacked one another in public, and openly expressed a genuine distaste for each other's existence.
The level of animosity has ebbed and flowed over the years, but their tolerance for each other seems to be inversely proportional to one another's success. As the September 10th coming-out party for AMD's Barcelona processor approaches, industry watchers are being treated to almost daily episodes of Intel and AMD exchanging barbs.
As expected, this week Intel launched its new quad-core Xeon MP chips for multiprocessor servers. The Xeon 7300 series, code-named "Tigerton," represents the last Intel product line to receive the new Core microarchitecture. The chips are targeted for high-end x86 servers that use four or more processors per box. Intel also delivered two dual-core Tigertons, the 2.93GHz E7220 and the 2.4GHz E7210. As I discussed last week, the Intel announcement seems to be timed to upstage AMD's launch of its new Barcelona processors next Monday.
AMD immediately shot back with a public response to the Tigerton announcement, sent out to a number of media outlets, including HPCwire. Here's an excerpt:
... Intel falls short yet again with "Tigerton" at the high-end of the x86 server industry.
The AMD Direct Connect Architecture, introduced in 2003, and the record-breaking scalability and energy-efficient performance it enables in AMD Opteron, have shown the industry that an integrated memory controller and high-speed direct connections between processor cores, memory, and I/O are the gold standard for x86 processor design. Intel will finally transition to its version of this architecture in late 2008 according to its public statements -- five years after AMD introduced this x86 design. Thus, "Tigerton" has the unfortunate distinction of being near last in a line of a dying architecture based on a Front Side Bus bottleneck. Nowhere are the limitations of a Front Side Bus architecture more keenly felt than in the high-end Multi-Processor server market. So while Intel may publicly "celebrate" the arrival of Tigerton, it is in fact the final inadequate attempt by Intel to make the Front Side Bus architecture scale.
Tigerton is still a dual-core processor design, just as "Penryn" will be. Intel won't offer a quad-core processor design until late 2008, more than a year after AMD. To achieve full performance scaling on real world multi-threaded workloads, real design work is needed. Packaging dual-cores together into quad-cores is insufficient, as clearly Intel itself understands. Why else transition to native quad-core in late 2008?
Multi-processor servers are memory intensive machines, which means more DIMMs of memory. And while AMD continues to utilize DDR2 memory technology, Intel relies on FB-DIMM memory, which consumes an average of 4-6 watts more power per memory DIMM. Independent reviewers like Neal Nelson, AnandTech and InfoWorld all show AMD with a significant advantage in energy efficiency.
This scales proportionally with the number of processors and the attendant increase in memory modules. With a typical rating of 130 watts per processor, plus the FB-DIMM and memory controller power penalties, power consumption and thermals go up dramatically with Tigerton compared to Intel's dual-core processors; this is not the direction customers want to head. AMD "Barcelona" offers the same power/thermals as our current dual-core processors.
There is no clean upgrade path for existing Intel "Tulsa" MP systems, meaning customers continue to face greater disruption and complexity from Intel processor roadmaps. And additional platform disruption looms ahead for 2008 as Intel finally scraps the Front Side Bus.
Virtualization is perhaps the world's most memory-intensive application, meaning the Front Side Bus bottleneck is a liability. And because virtualization loves memory, any "Tigerton"-based system will be loaded with power-hungry FB-DIMMs. "Tigerton" for virtualization means more money to power, more money to cool, with a memory bottleneck.
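AMD's power argument above is, at bottom, simple arithmetic: the per-DIMM penalty multiplied by the DIMM count. Below is an illustrative sketch of that math, using only the 4-6 watt figure cited in the statement; the DIMM counts are hypothetical examples, not configurations named in the press release.

```python
def fbdimm_power_penalty(num_dimms, extra_watts_per_dimm=5.0):
    """Estimate the extra platform power (in watts) attributable to
    FB-DIMMs versus DDR2, using AMD's cited 4-6 W per-module penalty
    (midpoint of 5 W by default)."""
    return num_dimms * extra_watts_per_dimm

# Hypothetical memory configurations for a four-socket MP server:
for dimms in (8, 16, 32):
    print(f"{dimms} DIMMs -> ~{fbdimm_power_penalty(dimms):.0f} W extra")
```

For a fully populated 32-DIMM box, the midpoint works out to roughly 160 extra watts for memory alone, which is the kind of delta AMD is pointing at when it calls FB-DIMMs "power-hungry."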
The level of rhetoric reflects the stakes for AMD. The success of its new quad-core products is central to the company's ability to become profitable once again and regain lost market share. And it's important that AMD do so this year, since in 2008 Intel will introduce the Common System Interface (CSI) -- Intel's answer to HyperTransport. As I suggested last week, CSI will be the real challenge for AMD in 2008 and 2009. Until then, the company will need to string together some strong quarters of financial growth so that it can invest in new technology to differentiate its products in the years ahead.
What may not be so intuitive is the importance of AMD's success to the wider industry. Not only has AMD served as a counterweight to Intel's dominance, but it has also spurred true innovation in x86 technology. Prior to AMD's introduction of 64-bit x86 computing in 2003, Intel was content to focus its 64-bit efforts on the Itanium. In response to the success of the 64-bit Opteron and Athlon, Intel was forced to follow its rival's lead and develop its own 64-bit x86 architecture. Without AMD, a 64-bit version of x86 might never have happened. I'll leave it to the reader to decide if this was a leap forward or not.
AMD went on to develop a new system architecture around the x86 core, adding HyperTransport as a high performance system interconnect and integrated memory controllers to improve system performance. Playing catch-up, Intel redesigned the microarchitecture, which debuted in 2006 under the Core moniker, and is in the process of redesigning the system architecture with CSI. The competition also compelled Intel to establish an aggressive "tick-tock" cycle for its x86 products: shrinking the process technology one year, followed by a new microarchitecture design the following year. Neither the Itanium nor other Intel chips are on such a fast-paced development schedule. It's difficult to imagine the server landscape today without multicore 64-bit x86 processors, especially in the high performance computing space.
It should be no surprise that when multiple companies have access to the same processor architecture, customers have better choices and the processor ecosystem expands for everyone. This has certainly happened for x86 users. Other examples are PowerPC and MIPS. Both architectures are licensed across a variety of vendors and this has resulted in two of the most widely used processor families today. It's doubtful this would have occurred if those two architectures were each being productized by a single vendor. Along those lines, Sun Microsystems has created an open source hardware project for its multicore UltraSPARC T1 and T2 processors in an effort to rapidly expand the ecosystem around those architectures.
Companies like Intel and AMD, whose chips go head-to-head against each other, must find a way to advance their products at the expense of their competitor. As a result, Intel and AMD are relatively more concerned about expanding market share than they are about expanding the market. Companies like Sun and IBM, which combine chipmaking with servermaking, are more motivated to stimulate the ecosystem around their processor architectures, since these companies sell unique, high-value systems and services based on those architectures. In that respect, the AMD-Intel relationship is unique and creates the special conditions for the kind of knock-down-drag-out fight that we've become accustomed to.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - September 06, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.