March 13, 2012
HPC installations make use of the best chips available, but performance doesn’t rely solely on processors and memory. The network connecting each compute node serves as the backbone for a supercomputer. Last week in a press release, IBM announced a prototype chip to make that backbone a little stronger.
The Holey Optochip is a parallel optical transceiver chipset capable of transferring 1 terabit of data per second. Compared to copper wiring, which carries data as electrical signals, optical networking achieves much higher transfer rates using pulses of light.
To get some sense of its performance, the IBM press release provides this context: “The raw speed of one transceiver is equivalent to the bandwidth consumed by 100,000 users at today’s typical 10 Mb/s high-speed internet access. Or, it would take just around an hour to transfer the entire U.S. Library of Congress web archive through the transceiver.”
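The press release's comparisons can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch (the archive size below is derived from the one-hour claim, not an independently stated figure):

```python
# Sanity-check IBM's bandwidth comparisons
# (decimal units: 1 Tb = 1e12 bits, 1 Mb = 1e6 bits).

transceiver_bps = 1e12   # Holey Optochip: 1 terabit per second
user_bps = 10e6          # "typical" 2012 broadband: 10 Mb/s

equivalent_users = transceiver_bps / user_bps
print(f"Equivalent broadband users: {equivalent_users:,.0f}")  # 100,000

# The one-hour Library of Congress claim implies an archive of roughly:
seconds = 3600
archive_bytes = transceiver_bps * seconds / 8  # bits -> bytes
print(f"Implied archive size: {archive_bytes / 1e12:.0f} TB")  # 450 TB
```

The 100,000-user figure checks out exactly; the one-hour transfer claim implies a web archive on the order of a few hundred terabytes.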
The manufacturing process involves creating 48 holes, called optical vias, in a single 90-nanometer IBM CMOS die. The holes provide optical access through the back of the chip to its 24 transmit and 24 receive channels. Photodiode arrays and vertical-cavity surface-emitting lasers (VCSELs) are then soldered directly to the chip. The resulting Holey Optochip is ready to couple with a 48-channel multimode fiber array.
For all the innovations the company highlights, the Optochip's underlying technology is parallel optics, which maximizes bandwidth by providing full-duplex connectivity over multiple fibers. While parallel optics can deliver higher aggregate speeds than traditional fiber technology, its maximum operating distance is usually less than 150 meters.
Along with transfer speed and manufacturing innovations, IBM also touts the Holey's power efficiency. The device requires less than five watts to operate, a level IBM says is essential to reducing the power consumed by data communications. According to the researchers, the technology "represents the first practical demonstration of an optical interconnect that attains the efficiency levels that will be required for exascale computers circa 2020."
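Interconnect efficiency is usually quoted as energy per bit, and the article's headline figures make that easy to estimate. A rough sketch, treating five watts as an upper bound on operating power:

```python
# Energy-per-bit estimate from the article's headline numbers.
power_watts = 5.0       # upper bound on operating power
bandwidth_bps = 1e12    # 1 Tb/s aggregate throughput

joules_per_bit = power_watts / bandwidth_bps
picojoules_per_bit = joules_per_bit * 1e12
print(f"Energy per bit: <= {picojoules_per_bit:.0f} pJ/bit")
```

Five watts at a terabit per second works out to at most about 5 picojoules per bit, which puts the exascale-efficiency claim in concrete terms.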
IBM also makes a point of noting that the chip's components are available off the shelf today, which should keep production costs down. There are no definitive plans to produce the Holey Optochip in the near term; however, the researchers estimate that commercialization could take place within the next decade.
Crossing the 1 Tbps threshold will certainly open new doors for a multitude of HPC applications, and is especially relevant for exascale computing; the Holey Optochip's power consumption and data performance are its key attributes in this regard. However, there is plenty of time before exascale computing becomes a reality, and a number of competing technologies cooking in research labs elsewhere offer similar capabilities.