The HyperTransport technology was introduced in 2001 to provide a general-purpose, low-latency, high-bandwidth system interconnect designed to overcome some of the shortcomings of shared-bus technologies and proprietary interconnects. The HyperTransport Consortium controls the technology specification and drives its development. This non-profit organization maintains HyperTransport as an open standard, available to any vendor willing to become a Consortium member. Since its introduction, the technology has attracted system designers in high performance computing and other IT segments where performance and scalability are of paramount importance.
In the multiprocessor world of supercomputing, the system interconnect is as important as the processors themselves. Since commodity processors are now used in the majority of these machines, the proprietary interconnect fabric is becoming one of the more expensive elements of the system. Of course, this is not just the case for supercomputers. Servers, network appliances and even desktop systems have a great need for fast data transfers. But in the high-performance realm, the need for a low-latency, high-bandwidth interconnect is especially acute.
“Even though it's used in very high-end systems, it's also used in very low-end PCs, with an eye to reducing the cost,” said David Rich, president of the HyperTransport Consortium. “So the technology has to be very accepting of the quality of the signal integrity that's on the board. We can't specify a very expensive board manufacturing regimen to get the speed.”
Originally just used for processor-to-processor connections, HyperTransport now provides processor-to-peripheral links as well. With last month's introduction of the HyperTransport 3.0 specification, it can now be used for system-to-system connections. More about that later.
One of the most prominent uses of HyperTransport in the HPC domain is in AMD's Opteron processors, which are the basis of many high performance clusters and supercomputers, such as Cray's Red Storm system. In fact, the popularity of the Opteron throughout the server and high performance segments has helped to drive adoption of the HyperTransport technology throughout the IT industry. As the number of devices that want to talk directly to Opterons grows, so grows the demand for HyperTransport. The recently announced DRC FPGA coprocessor also takes advantage of the HyperTransport technology by plugging directly into AMD Opteron sockets.
Another example of this technology in high performance computing is PathScale's InfiniPath adapter. Here, HyperTransport has been used to achieve extremely low-latency cluster interconnects (1.29 microseconds of MPI latency).
According to Mario Cavalli, general manager of the HyperTransport Consortium, one of the unique strengths of HyperTransport is its processor-native interface. This provides simple but highly efficient chip-to-chip communication that scales with the number of HyperTransport-enabled processors. Unlike front-side bus architectures, which require adapters to connect to standard buses like PCI or AGP, the HyperTransport interconnect is simpler and more flexible.
The technology supports variable bus widths, from 2 to 32 bits, and links of different widths can be mixed within a single application. An implementation may also choose its clock speed in 200MHz steps, up to a maximum of 2.6GHz in the 3.0 specification. This type of flexibility enables the system designer to specify a hardware implementation that closely matches the desired performance.
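As a rough illustration of how link width and clock translate into raw throughput, here is a minimal sketch assuming the double-data-rate signaling HyperTransport uses (two transfers per clock) and quoting aggregate figures that count both directions of the full-duplex link; the function name and example configurations are illustrative only.

```python
def aggregate_bw_gbs(width_bits, clock_mhz):
    """Raw aggregate link bandwidth in GB/s.

    Assumes double-data-rate signaling (two transfers per clock cycle)
    and sums both directions of the full-duplex link, which is how the
    aggregate figures are usually quoted.
    """
    transfers_per_sec = clock_mhz * 1e6 * 2            # DDR: two transfers per clock
    per_direction_gbs = transfers_per_sec * (width_bits / 8) / 1e9
    return 2 * per_direction_gbs                       # both directions

print(aggregate_bw_gbs(16, 800))   # 6.4  -- a typical early 16-bit Opteron-class link
print(aggregate_bw_gbs(8, 200))    # 0.8  -- a narrow, slow peripheral link
```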
Since it was introduced in 2001, the HyperTransport specification has been enhanced to increase both speed and functionality. As CPUs and networks get faster, and as more cores are added to the chip, the need to increase the processor's bandwidth grows accordingly. In addition, as computing systems get more specialized and more complex, there is a corresponding need for more design flexibility.
The new HyperTransport 3.0 specification was designed to create headroom in both bandwidth and flexibility. Specifically, the maximum aggregate bandwidth for the 3.0 specification is 41.6 GB/second, assuming a 32-bit bus at 2.6GHz. This is almost double the maximum bandwidth of the 2.0 specification and more than five times the highest bandwidth currently implemented in an actual product. Fortunately, the new spec maintains the same hardware pinouts; the heavy lifting is done by upgrading the physical signaling methods.
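Plugging the spec maxima into the helper from the sketch above bears out those figures; the 1.4GHz ceiling used here for HyperTransport 2.0 is that spec's commonly cited top clock and appears only for the comparison.

```python
ht3_max = aggregate_bw_gbs(32, 2600)   # 41.6 GB/s -- 32-bit link at 2.6GHz
ht2_max = aggregate_bw_gbs(32, 1400)   # 22.4 GB/s -- 32-bit link at HT 2.0's 1.4GHz ceiling
print(ht3_max / ht2_max)               # ~1.86, i.e., "almost double"
```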
Although no systems even approach this 3.0 bandwidth today, some vendors, like AMD, are undoubtedly making plans for faster implementations. David Rich says that we can expect to see 16-bit HyperTransport implementations make use of 80 to 90 percent of that bandwidth in the not-too-distant future.
The new specification also has some features targeted specifically at flexibility. One is power management, which is supported by dynamically changing the link width and clock frequency. The idea is that a link uses far less power at 200MHz and 4 bits than it would at 2.6GHz and 16 bits. So, for example, during those times when a server is doing calculations that are entirely in cache or memory and I/O traffic is idle, the HyperTransport link could power down significantly.
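To make the idea concrete, below is a toy policy, not anything defined by the specification, that picks the narrowest and slowest link state (from a handful of hypothetical width/clock steps) whose raw bandwidth still covers the current I/O demand; it reuses the aggregate_bw_gbs helper from the earlier sketch. Real HyperTransport power management is negotiated in hardware, but the trade-off it exploits is the same.

```python
# Hypothetical link states: (width in bits, clock in MHz), narrowest/slowest first.
LINK_STATES = [(4, 200), (8, 400), (8, 800), (16, 1000), (16, 2600)]

def pick_link_state(demand_gbs):
    """Return the lowest-power state whose raw bandwidth still meets demand."""
    for width, clock in LINK_STATES:
        if aggregate_bw_gbs(width, clock) >= demand_gbs:
            return width, clock
    return LINK_STATES[-1]               # saturate at the widest/fastest state

print(pick_link_state(0.1))    # (4, 200)   -- near-idle I/O, minimal link power
print(pick_link_state(12.0))   # (16, 2600) -- heavy I/O, full-speed link
```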
There is also an un-ganging feature that allows a connection to be split from one 16-bit link into two 8-bit links. This could naturally be applied to SMP applications. Other applications are possible as well, especially in multiprocessor implementations, where more HyperTransport links are needed for processor-to-processor as well as processor-to-I/O connections.
It should be noted that reducing link width from 16 to 8 bits is not a drawback in processor-to-processor subsystems, where latency matters far more than the raw bandwidth that dominates I/O processing. In this case, the latency of an 8-bit link can be just as low as that of a 16-bit link. An example could be 8-bit and 16-bit versions of recently introduced coprocessing platforms, such as DRC's FPGA coprocessor, mentioned earlier.
Perhaps the biggest new feature of HyperTransport 3.0 is the addition of the AC operating mode. This optional mode supports longer runs to backplanes, cables and other systems. HyperTransport's designers decided to add this feature as they saw an increased need for off-board connectivity in larger, more complex systems, where memory and processors often scale beyond a single board.
“We've made it much easier for people to design systems that have multiple boards and have chassis-to-chassis connections, so that they can physically construct the system as they want,” said Rich. “At full speed we're looking at about a meter. You can back off the speed and go substantially further. But we're not looking at this as even a 'room-area network.' It's basically for the interconnect within a system; but now those systems can get fairly large and complex. HyperTransport is scaling up with that size and complexity. So with HyperTransport 3.0, we can provide yet more headroom for the future in terms of what can be implemented. This puts us comfortably ahead of the requirements of the silicon products over the next couple of years.”