November 17, 2010
Lost in the hoopla about the ascendancy of China and GPGPUs in the TOP500 is the continuing saga of the InfiniBand-Ethernet interconnect rivalry. In the latest TOP500 list, the number of InfiniBand- and Ethernet-connected supercomputers is now nearly the same -- 215 for InfiniBand and 227 for Ethernet. But that's an 18 percent increase for the former and a 14 percent decrease for the latter compared to last year. Only seven 10 Gigabit Ethernet-based supercomputers made the current list, although that's up from just one such system last year.
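As a rough back-of-the-envelope check (these implied prior-year figures are inferred, not taken from the list itself), the year-over-year percentages can be run backwards to estimate last year's counts:

```python
# Back-of-the-envelope estimate only: infer roughly how many systems each
# interconnect had on the previous list from this year's counts and the
# stated year-over-year changes (rounding is approximate).
ib_now, eth_now = 215, 227

ib_last = ib_now / 1.18    # ~18 percent increase implies roughly 182 systems a year earlier
eth_last = eth_now / 0.86  # ~14 percent decrease implies roughly 264 systems a year earlier

print(round(ib_last), round(eth_last))
```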
For the top 100 systems, which encompass the majority of the FLOPS on the list, the numbers skew heavily in favor of InfiniBand. InfiniBand connects 61 percent of these elite machines, while Ethernet manages just a 1 percent share. Among the petaflop machines, InfiniBand is employed in 57 percent, with custom interconnects used in the remainder.
But proprietary interconnects have a substantial presence at the top. For example, vendor-specific system networks from Cray and IBM BlueGene are used in 23 percent and 9 percent of the top 100 systems, respectively. Even the number one Tianhe-1A system uses its own home-grown interconnect.
InfiniBand's growth in the big systems follows a trend that's been building for years. But that trend is not so much about InfiniBand per se as a more general movement toward system networks with the lowest possible latencies and the highest possible bandwidths.
In fact, at a press briefing here at SC10 on Tuesday, IDC's Earl Joseph predicted that at the high end of the supercomputing market, use of high-performance custom interconnects, such as EXTOLL, will actually expand. According to him, there are six vendors working on new supercomputing interconnects, with EXTOLL representing the only one that is publicly known.
The driving force is the escalating processor and core counts on these big machines. Connecting them together so they behave as one requires greater and greater performance from the network fabric. So while most of these petascale supercomputers are likely to be based on standard CPU and GPU architectures, it may end up that there is no dominant interconnect. But with InfiniBand speeds bumping up to 56 Gbps (4X FDR) in 2011 and 104 Gbps (4X EDR) in 2012, the technology will certainly be a big player in the petascale space for the foreseeable future.
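For reference, those headline rates are just the per-lane signaling rate multiplied across a 4X link; a minimal sketch, assuming nominal per-lane figures of 14 Gbps for FDR and 26 Gbps for EDR consistent with the roadmap of the time:

```python
# Minimal sketch: a 4X InfiniBand link aggregates four lanes, so the headline
# rate is lanes x per-lane signaling rate. The per-lane values below are
# assumed nominal figures matching the FDR/EDR roadmap cited above.
def link_rate_gbps(lanes: int, per_lane_gbps: float) -> float:
    return lanes * per_lane_gbps

print(link_rate_gbps(4, 14))  # 4X FDR -> 56 Gbps
print(link_rate_gbps(4, 26))  # 4X EDR -> 104 Gbps
```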
Speaking of GPUs, even though the machines accelerated by graphics chips made a great showing this year on the TOP500, their Linpack efficiency seems stuck at around 50 percent. Help is on the way, though. NVIDIA's GPUDirect technology (with support from compatible network adapters) should push those efficiency numbers up significantly. Of course, the idea is not just to get better Linpack results. GPUDirect can bypass redundant system memory copies, thus eliminating a lot of CPU overhead, which should speed up nearly all GPU computing applications. NVIDIA says the technology could crank up data transfer performance by as much as 30 times. That may be reason enough to check the InfiniBand box when building these big GPGPU supercomputers.
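To make the copy-elimination idea concrete, here is a hedged conceptual sketch -- not NVIDIA's API, and relying on a CUDA-aware MPI capability rather than the original GPUDirect mechanism -- contrasting a transfer staged through host memory with one that hands the device buffer straight to the MPI layer. It assumes mpi4py built against a CUDA-aware MPI, plus the CuPy package:

```python
# Conceptual sketch only: contrasts a host-staged transfer with a direct
# device-buffer transfer, the kind of copy elimination GPUDirect aims for.
# Assumes mpi4py on top of a CUDA-aware MPI and an installed CuPy.
from mpi4py import MPI
import cupy as cp

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 1 << 20
gpu_buf = cp.arange(n, dtype=cp.float32)   # data resident in GPU memory

# Path 1: stage through host memory -- an extra device-to-host copy the CPU must manage.
host_buf = cp.asnumpy(gpu_buf)
if rank == 0:
    comm.Send(host_buf, dest=1, tag=0)
elif rank == 1:
    comm.Recv(host_buf, source=0, tag=0)

# Path 2: a CUDA-aware MPI accepts the device buffer directly, so the
# intermediate host copy (and the CPU overhead that goes with it) disappears.
if rank == 0:
    comm.Send(gpu_buf, dest=1, tag=1)
elif rank == 1:
    comm.Recv(gpu_buf, source=0, tag=1)
```

Run under two MPI ranks (for example, `mpiexec -n 2 python sketch.py`); the point is only to show where the redundant host copy sits, not to benchmark anything.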
The battle for market share between InfiniBand and custom interconnects at the high end is shaping up to be a rather interesting rivalry -- in some ways a more interesting one than the InfiniBand-10GbE battle for less well-endowed machines. In the meantime, we can look forward to a richer and more diverse interconnect landscape than the one we have today.
Posted by Michael Feldman - November 17, 2010 @ 5:51 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.