December 10, 2009
Enables the full HT3 bit rate of 6.4 GT/s and 51.2 GB/s of aggregate bandwidth at cable lengths of up to 2 meters
SUNNYVALE, Calif., Dec. 10 -- The HyperTransport Consortium today released two new HyperTransport connector/cable specifications that enable more innovative ways of implementing and interconnecting HyperTransport links in datacenter and high-performance computing platforms. The specifications define a comprehensive portfolio of high-performance, compact and fully standardized connectors and cables capable of carrying HyperTransport links at their full 3.2 GHz clock rate over distances of up to 2 meters with excellent signal integrity, unmatched by traditional printed circuit board (PCB) technology. The portfolio enables a new breed of board-to-board, system-to-subsystem, system-to-appliance and chassis-to-chassis interconnect solutions for applications such as motherboards, special function subsystems, servers, blade servers and server clusters.
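For context, the headline figures follow from straightforward arithmetic. The short sketch below is a back-of-the-envelope illustration, not part of the specification; it assumes double-data-rate signaling and the full 32-bit link width defined by HyperTransport 3.1, neither of which is stated explicitly in the release.

```python
# Back-of-the-envelope check of the headline HT3 figures.
# Assumptions (not stated in the release): double-data-rate (DDR)
# signaling and the full 32-bit link width defined by HT 3.1.

clock_hz = 3.2e9                      # maximum HT3 link clock rate
transfers_per_sec = clock_hz * 2      # DDR: two transfers per clock cycle

link_width_bits = 32                  # widest link the specification defines
per_direction_bytes = transfers_per_sec * link_width_bits / 8

# HyperTransport links are full duplex, so aggregate bandwidth is doubled.
aggregate_bytes = 2 * per_direction_bytes

print(f"{transfers_per_sec / 1e9:.1f} GT/s per lane")   # -> 6.4 GT/s
print(f"{aggregate_bytes / 1e9:.1f} GB/s aggregate")    # -> 51.2 GB/s
```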
The new specifications standardize a physical layer complement to the High Node Count (HNC) specification released earlier this year by the HyperTransport Consortium. The HNC specification defines extensions to the HyperTransport 3 protocol that answer the industry challenge of addressing the exponentially increasing number of CPU cores and computing nodes in high performance systems.
"We have evolved HyperTransport from the well established role of high performance chip-to-chip interconnect standard, to a full-fledged role of first and only system-wide interconnect standard capable of fulfilling the industry's most demanding commercial and scientific computing requirements," said Mario Cavalli, general manager of the HyperTransport Consortium. "Together, the HNC and Connector specifications enable highly scalable, heterogeneous, fully hardware-virtualized and modularized resource-sharing computing platforms that support global shared memory architectures. These are best suited to deliver the performance, energy efficiency and cost optimization that datacenter and high performance computing markets need going forward."
The new specifications are the result of collaborative work between the Consortium's Technical Working Group (TWG) and Samtec, Inc., a world leader in high performance interconnect technology and materials and a member of the HyperTransport Consortium.
"HyperTransport technology delivers leading-edge performance that is the perfect match and proving ground for our interconnect technology expertise," said David Givens, director of standards and development manager of Samtec, Inc. "Our close cooperation with the HyperTransport Consortium team has enabled us to develop and standardize state-of-the-art interconnect solutions that we expect will open new, enabling opportunities for system design engineers and scalable computing architects."
The HyperTransport Node Connector Specification defines right angle and vertical mount female cable connectors, as well as a universal male cable connector. The right angle female connector carries 2x independent, stacked 8-bit HyperTransport links in a 30 x 30 x 14.6 mm edge-mount shell for motherboard and add-on card use. The vertical mount female connector is a 27 x 9 x 8.7 mm small-footprint connector that can easily be positioned anywhere on system motherboards or add-on cards, allowing a system's CPU to be linked directly to either in-chassis or external HyperTransport subsystems. Both the right angle and vertical mount female connectors are compatible with the 27 x 25.4 x 6.1 mm universal male cable connector. Both 8-bit and 16-bit HyperTransport link configurations are supported.
The HyperTransport Mezzanine Connector Specification defines highly compact, vertical mount male and female connectors measuring 55.7 x 8.3 x 10.6 mm and 56.6 x 5.6 x 5 mm, respectively. The connectors support 2x 8-bit or 1x 16-bit HyperTransport link configurations and can be used for stacked, board-to-board connections without cables. The mezzanine connectors also carry a number of user-definable pins and are ideally suited for in-system, add-on function modularity in the form of multi-processor modules, network interface cards, acceleration modules and other special-function modules.
The mechanical structure and the signal, ground and power pin allocation of all standardized HyperTransport connectors have been defined to enable clean escape routing in PCB designs.
About the HyperTransport Technology Consortium
The HyperTransport Technology Consortium is a membership-based, non-profit organization that licenses, manages and promotes HyperTransport Technology. The Consortium was founded in 2001 by leading technology innovators including AMD, Broadcom, Cisco, NVIDIA and Sun Microsystems, and today counts several industry-leading members worldwide, including AMD, Broadcom, Cisco, Cray, Dell, HP, IBM, NVIDIA and Sun Microsystems. Consortium membership is based on a yearly fee and is open to companies interested in licensing the royalty-free use of HyperTransport technology and intellectual property. Consortium members have full access to the HyperTransport technical support database, may attend Consortium meetings and events, and may benefit from a variety of technical and business promotion services that the Consortium offers at no cost to its members. To learn more about member benefits and how to become a Consortium member, visit the Consortium Web site at http://www.hypertransport.org.
Source: HyperTransport Consortium