December 10, 2009
Enables full HT3 bit rate of 6.4 GT/s and 51.2 GB/s aggregate bandwidth up to 2 meters
SUNNYVALE, Calif., Dec. 10 -- The HyperTransport Consortium today released two new HyperTransport connector/cable specifications that enable more innovative ways of implementing and interconnecting HyperTransport links in datacenter and high-performance computing platforms. The specifications define a comprehensive portfolio of high-performance, compact and fully standardized connectors and cables capable of carrying HyperTransport links at their full 3.2 GHz clock rate over distances of up to 2 meters with excellent signal integrity, unmatched by traditional printed circuit board (PCB) technology. The portfolio enables a new breed of board-to-board, system-to-subsystem, system-to-appliance and chassis-to-chassis interconnect solutions for applications such as motherboards, special-function subsystems, servers, blade servers and server clusters.
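The headline figures can be reconciled with simple arithmetic (an illustrative back-of-the-envelope check, not text from the specifications): a 3.2 GHz clock with double-data-rate signaling yields 6.4 GT/s per lane, and the 51.2 GB/s aggregate corresponds to a full-width 32-bit HT3 link (4 bytes per transfer) counted in both directions:

```python
# Back-of-the-envelope check of the press release's headline numbers.
# Assumptions (mine, not the spec's): DDR signaling doubles the clock
# into transfers/s, and "aggregate" counts both link directions of a
# full-width 32-bit (4-byte) HyperTransport 3 link.

clock_ghz = 3.2                                   # HT3 clock rate
transfers_per_s = clock_ghz * 2                   # DDR -> 6.4 GT/s
link_width_bytes = 4                              # 32-bit link
per_direction_gbs = transfers_per_s * link_width_bytes  # 25.6 GB/s
aggregate_gbs = per_direction_gbs * 2             # both directions -> 51.2 GB/s

print(transfers_per_s, aggregate_gbs)             # 6.4 51.2
```

The same 51.2 GB/s aggregate can equivalently be reached by two stacked 16-bit links, the configuration the node connectors below are designed to carry.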
The new specifications standardize a physical layer complement to the High Node Count (HNC) specification released earlier this year by the HyperTransport Consortium. The HNC specification defines extensions to the HyperTransport 3 protocol that answer the industry challenge of addressing the exponentially increasing number of CPU cores and computing nodes in high performance systems.
"We have evolved HyperTransport from the well established role of high performance chip-to-chip interconnect standard, to a full-fledged role of first and only system-wide interconnect standard capable of fulfilling the industry's most demanding commercial and scientific computing requirements," said Mario Cavalli, general manager of the HyperTransport Consortium. "Together, the HNC and Connector specifications enable highly scalable, heterogeneous, fully hardware-virtualized and modularized resource-sharing computing platforms that support global shared memory architectures. These are best suited to deliver the performance, energy efficiency and cost optimization that datacenter and high performance computing markets need going forward."
The new specifications are the result of collaborative work between the Consortium's Technical Working Group (TWG) and Samtec, Inc., a world leader in high performance interconnect technology and materials and a member of the HyperTransport Consortium.
"HyperTransport technology delivers leading-edge performance that is the perfect match and proving ground for our interconnect technology expertise," said David Givens, standards development manager at Samtec, Inc. "Our close cooperation with the HyperTransport Consortium team has enabled us to develop and standardize state-of-the-art interconnect solutions that we expect will open new, enabling opportunities for system design engineers and scalable computing architects."
The HyperTransport Node Connector Specification defines right-angle and vertical-mount female cable connectors, as well as a universal male cable connector. The right-angle female connector carries two independent, stacked 8-bit HyperTransport links in a 30 x 30 x 14.6 mm edge-mount shell for motherboard and add-on card use. The vertical-mount female connector is a small-footprint 27 x 9 x 8.7 mm connector that can be positioned almost anywhere on a system motherboard or add-on card, allowing a system's CPU to be linked directly to in-chassis or external HyperTransport subsystems. Both female connectors mate with the 27 x 25.4 x 6.1 mm universal male cable connector. Both 8-bit and 16-bit HyperTransport link configurations are supported.
The HyperTransport Mezzanine Connector Specification defines highly compact, vertical-mount male and female connectors, measuring 55.7 x 8.3 x 10.6 mm and 56.6 x 5.6 x 5 mm respectively, that support 2x 8-bit or 1x 16-bit HyperTransport link configurations and can be used for stacked, board-to-board connections without cables. The mezzanine connectors carry a number of user-definable pins and are ideally suited for in-system, add-on function modularity in the form of multi-processor modules, network interface cards, acceleration modules and other special-function modules.
The mechanical structure and the signal, ground and power pin allocation of all standardized HyperTransport connectors have been defined to optimize escape routing in PCB designs.
About the HyperTransport Technology Consortium
The HyperTransport Technology Consortium is a membership-based, non-profit organization that licenses, manages and promotes HyperTransport technology. The Consortium was founded in 2001 by leading technology innovators including AMD, Broadcom, Cisco, NVIDIA and Sun Microsystems, and today counts several industry-leading members worldwide, including AMD, Broadcom, Cisco, Cray, Dell, HP, IBM, NVIDIA and Sun Microsystems. Consortium membership is based on a yearly fee and is open to companies interested in licensing the royalty-free use of HyperTransport technology and intellectual property. Consortium members have full access to the HyperTransport technical support database, may attend Consortium meetings and events, and may benefit from a variety of technical and business promotion services that the Consortium offers at no cost to its members. To learn more about member benefits and how to become a Consortium member, visit the Consortium Web site at http://www.hypertransport.org.
Source: HyperTransport Consortium