January 17, 2013
Intel Corp showed off its silicon photonics technology at this week's Open Compute Project (OCP) Summit in Santa Clara, California. The event is a forum for industry players to share technology and ideas for the betterment of datacenter hardware. In this case, betterment means greater efficiency, less complexity, and more open standards. Intel believes its silicon photonics technology can drive all of these goals, especially the first two.
The idea behind silicon photonics is to bring optical communication into the realm of semiconductors, thereby leveraging the natural advantages of chip manufacturing: density, power efficiency, and scalability. Integrating photonics on-chip will also allow for much higher data transfer rates between processors and devices than are possible with today's electronics-based optical componentry. All of that translates into hardware that needs fewer parts and is less expensive to build and run.
The OCP presentation, delivered by Intel CTO Justin Rattner, featured an Intel-designed prototype "photonic rack" built by Quanta Computer. According to the press release, Intel has moved its silicon photonics work beyond the R&D stage and has produced engineering samples that run at 100 Gbps. The OCP set-up incorporated this 100G technology, which "enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today's copper based interconnects."
The prototype also makes use of Ethernet switch silicon, presumably based on technology from Fulcrum Microsystems, which Intel acquired in 2011. The motherboard design is such that the photonic rack will support both Intel's Xeon and its next-generation Atom ("Avoton") processors. It was unclear how the photonics capability was intended to be integrated, although Rattner's presentation slides showed motherboard mock-ups for both Ethernet and PCIe connectivity.
The work is part of a collaboration between Intel and Facebook to build future server racks that enable the disaggregation of compute, switching and storage. The rationale is that once you have the optical communications on-chip, data transfers between computers or between computers and storage become much faster and simpler and require a good deal less energy. That means it makes more sense to build separate boxes for compute and storage, since transfer speed (and latency) is less of a limiting factor. At some point, Intel believes the technology can be used to decouple compute and memory as well.
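To make the disaggregation argument concrete, here is a minimal back-of-the-envelope sketch (not from Intel or Facebook; the link speeds, efficiency factor, and 64 GiB working set are illustrative assumptions) comparing how long it takes to pull a block of data from a separate storage box over a conventional 10 Gbps copper link versus a 100 Gbps photonic link:

# Illustrative sketch only: all figures below are assumptions for the sake of example.

def transfer_time_seconds(payload_bytes, link_gbps, efficiency=0.9):
    # Time to move payload_bytes over a link of link_gbps, assuming the link
    # sustains `efficiency` of its nominal line rate (protocol and framing overhead).
    usable_bits_per_second = link_gbps * 1e9 * efficiency
    return (payload_bytes * 8) / usable_bits_per_second

payload = 64 * 2**30  # a hypothetical 64 GiB working set fetched from a storage node

for label, gbps in [("10G copper", 10), ("100G photonic", 100)]:
    t = transfer_time_seconds(payload, gbps)
    print(f"{label:>14}: {t:6.1f} s to move 64 GiB")

Under these assumptions the fetch drops from roughly a minute to a few seconds. An order-of-magnitude jump in inter-box bandwidth is what makes treating remote storage (and eventually remote memory) almost as if it were local plausible, which is the case for splitting compute and storage into separately built and upgraded racks.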
Although this particular effort was directed at the needs of Facebook and similar ultra-scale datacenter applications, silicon photonics technology is also expected to figure prominently in products aimed at the high performance computing market. And given Intel's plans to integrate network controllers across its processor line, it's not too hard to see where the company is headed with all of this.