March 09, 2007
If I ever develop an optical interconnect, I'm going to call it LotsaLux. Too cute? I should ask the folks at Lightfleet Corporation. On Monday the company unveiled its own optical interconnect technology called Corowave, whose name is derived from the verb coruscate, which means to sparkle or reflect brilliantly.
Lightfleet's Corowave interconnect uses laser transmitters and opto-electric receivers to support inter-processor communication in a highly parallel fashion. Each compute or storage node contains a transmitter and a receiver. Mirrors and lenses are used to direct the light transmissions to receivers in an all-to-all topology. The all-to-all nature of the Corowave interconnect is the key to the technology.
Chris Kruell, Lightfleet's VP of marketing, says the interconnect can be applied to a range of computer environments -- data center servers, telco equipment, and embedded devices -- anywhere that multiple nodes talk to each other incessantly. The all-to-all interconnect is designed to avoid the congestion and saturation of a traditional interconnect.
By eliminating the internal crossbar switches and cables, the design reduces the number of communication components by a factor of 40, according to Lightfleet. That allows the interconnect to fit into a relatively compact space -- one third of a cubic foot for a 32-way server. In addition, the all-to-all connectivity keeps latency flat as the number of nodes scales up.
Kruell says any technical or commercial application that relies on multicast or broadcast communication would benefit -- that is, just about any highly parallel workload on a multiprocessor system. For example, if someone wanted to combine data mining with video streaming to do real-time intelligent ad insertion, this type of data communication would be ideal. Another candidate would be a drug interaction simulation in which molecular dynamics data is introduced into a static mesh simulation. Because you don't know where the next piece of data is coming from, all-to-all network communication has a tremendous advantage.
The degree of speedup will certainly depend upon the nature of the program. Existing MPI applications that make heavy use of the all-to-all or broadcast functions would be prime targets. But new applications that were specifically designed to take advantage of highly parallelized communication could be the real beneficiaries of Corowave.
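For readers who want to see what that communication pattern looks like in practice, here is a minimal, generic MPI sketch of the collective operation Kruell is referring to -- not Lightfleet code, just an illustration of the kind of all-to-all exchange such an interconnect would be asked to service. Each rank contributes one integer to every other rank via MPI_Alltoall.

```c
/* Illustrative only: a generic MPI all-to-all exchange, not Lightfleet code.
 * Each rank sends one integer to every other rank and receives one from each.
 * Compile: mpicc alltoall_demo.c -o alltoall_demo
 * Run:     mpirun -np 4 ./alltoall_demo
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* sendbuf[i] holds the value this rank sends to rank i */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;

    /* On a conventional switched fabric this collective is decomposed into
     * many point-to-point messages; a true all-to-all medium could, in
     * principle, service it directly. */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d received value %d from rank %d\n",
           rank, recvbuf[(rank + 1) % size], (rank + 1) % size);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```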
"A true all-to-all architecture has not been available before," says Kruell. "So there's going to be a huge speedup potential by optimizing for that."
According to Kruell, another benefit of the Corowave technology is the zero incremental overhead for transmitting one-to-one or one-to-all.
"In a typical cluster today, the approach to multicast is to establish, usually in software, a set of serial point-to-point messages all containing the same thing, which can needlessly consume bandwidth of the I/O processors. The inherent parallel nature of the Corowave interconnect can eliminate these extra data sends and can free up the I/O processors to handle incremental data communications."
This week's announcement was intended mainly to get potential customers buzzing about the technology. Lightfleet plans to incorporate the Corowave interconnect into its own high performance server, which is scheduled for release in July 2007. The company is also looking to license the technology to other OEMs, as yet unnamed.
It'll be interesting to see side-by-side performance comparisons of systems and applications when this technology gets put into real boxes.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - March 08, 2007 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.