Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

August 31, 2007

Switching Buses

by Michael Feldman

With the launch of the Barcelona quad-core processor scheduled in a couple of weeks, AMD is hoping to salvage a rather miserable 2007 and build some momentum for next year. Regardless of how the Barcelona fares against Intel’s latest Xeon quad-core offerings, the Opterons are still the darlings of the HPC world. Because AMD long ago decided to forgo the front side bus (FSB) and discrete memory controllers in favor of its HyperTransport interconnect and an integrated controller, the Opteron line is able to address some key requirements of HPC systems: scalability and memory performance.

But Intel is looking to level the playing field. Until recently, the company had resisted changing its fundamental architecture in order to preserve the investments it made in its FSB technology. But as multicore CPUs become more powerful, the need to alleviate the memory bottleneck and to support cache-coherent non-uniform memory architectures on multiprocessor (MP) platforms is forcing Intel to mirror AMD’s design. As part of Intel’s next-generation Nehalem microarchitecture in 2008, the company plans to support a HyperTransport-like interconnect called the Common System Interface (CSI), as well as an integrated memory controller. (CSI is apparently just the internal name at Intel; the rumor is that the commercial release will be called “QuickPath.”) Similar to HyperTransport, it will offer a high-bandwidth, low-latency, point-to-point interconnect for system components.

At this point, it seems likely that the older FSB design will be retained in lower-end Nehalem processors, such as those destined for PCs, laptops and low-core-count, single-processor servers. But the Xeons targeted for the kinds of servers and workstations used to build high-end systems will almost certainly incorporate the new CSI and on-chip memory controller. These new microprocessors are scheduled to be rolled out in 2008 and 2009.

CSI and on-chip memory controllers will also be used in the next-generation “Tukwila” Itanium processors, which will debut in 2008. Itaniums, like their Xeon brethren, currently rely on large banks of on-chip cache to help circumvent the memory performance limitations inherent in the FSB/discrete memory controller architecture. The better performance provided by this new design should help the Itanium compete against its POWER and Sparc processor rivals.

Intel has released few details of the CSI architecture publicly. But David Kanter, Real World Technologies manager and editor, has managed to piece together a rather detailed description of the CSI design, apparently derived from Intel patent applications. In an analysis published this week, he discusses Intel’s CSI approach and the impact it could have on the x86 market.

Based on the Intel patents, a CSI physical link will be 5, 10 or 20 bits wide, depending upon the nature of the connection. Each link will provide as much as 24 to 32 GB/s, somewhat ahead of the 20.8 GB/s offered by the latest HyperTransport 3.0 specification. Like HyperTransport, CSI will have the ability to dynamically configure link resources and optimize power usage.
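As a rough back-of-the-envelope check, peak bandwidth for a point-to-point link of this kind scales with lane count times transfer rate. The sketch below assumes one bit per lane per transfer and an illustrative signaling rate of 6.4 GT/s; neither figure comes from the article, and the real CSI rates were not public at the time:

```python
def link_bandwidth_gbs(lanes: int, transfer_rate_gt_s: float) -> float:
    """Peak unidirectional bandwidth in GB/s for a point-to-point link.

    Assumes one bit per lane per transfer. transfer_rate_gt_s is in
    gigatransfers per second (a hypothetical figure for illustration).
    """
    return lanes * transfer_rate_gt_s / 8  # 8 bits per byte

# At an assumed 6.4 GT/s, a full-width 20-lane link would move
# 16 GB/s in each direction; 5- and 10-lane links scale down
# proportionally.
for lanes in (5, 10, 20):
    print(f"{lanes:2d} lanes: {link_bandwidth_gbs(lanes, 6.4):.1f} GB/s")
```

Under those assumptions, a full-width link carrying traffic in both directions simultaneously would total 32 GB/s, which is consistent with the upper end of the per-link figures Kanter cites.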

Kanter believes that the introduction of CSI and on-chip memory controllers could substantially shift the competitive balance between Intel and AMD in scaled-up servers. He estimates that Intel currently holds approximately a 50 percent share of the multiprocessor server market, compared to 75-80 percent of the total x86 market. It follows that if AMD were to lose its current architectural advantage in the MP server space, it could see its share of this market cut by half or more.

Writes Kanter:

To Intel, the launch of a broad line of CSI based systems will represent one of the best opportunities to retake server market share from AMD. New systems will use the forthcoming Nehalem microarchitecture, which is a substantially enhanced derivative of the Core microarchitecture, and features simultaneous multithreading and several other enhancements. Historically speaking, new microarchitectures tend to win the performance crown and presage market share shifts. This happened with the Athlon, the Pentium 4, Athlon64/Opteron, and the Core 2 and it seems likely this trend will continue with Nehalem. The system level performance benefits from CSI and integrated memory controllers will also eliminate Intel’s two remaining glass jaws: the older front side bus architecture and higher memory latency.

No mention was made of whether Intel is considering a Torrenza-like socket specification to give third-party co-processors access to CSI via an open-standard socket. Although the use of Torrenza is not widespread today, it is gaining some traction, especially in the HPC realm, where DRC and XtremeData have built socket-pluggable FPGA co-processor modules for application acceleration. While Intel hasn’t embraced third-party co-processing the way AMD has, a CSI-friendly socket standard would seem to be a logical strategy to counter Torrenza.

Over the next several months, much attention is going to be paid to Intel’s next-generation 45nm Penryn processors. They will certainly give Intel the ability to offer a greater range of performance and low-power offerings. But for HPC users, the real revolution, CSI, is still a year or two away. If Intel manages to use this technology to close the MP scalability and memory performance gap with its rival, AMD will be forced to innovate in other ways. If you’re Intel or AMD, the competition will be challenging, but the rest of the industry gets to enjoy the benefits.