December 21, 2007
ZURICH, Switzerland, Dec. 17 -- The performance of tomorrow's supercomputers will be dictated by their ability to exchange large volumes of data instantly among the hundreds of thousands of processors from which they are built. Using optical networks that carry data through the system as light, researchers at IBM and Corning Inc., under a project sponsored by the US Department of Energy/NNSA, have demonstrated the world's most advanced and powerful optical packet switch. This novel switch can transmit 2.5 Terabits of data -- the equivalent of 20 high-definition movies -- in a single second.
Today's supercomputers, such as IBM's Blue Gene system, are based on tens of thousands of relatively simple and power-efficient processors that work in parallel to solve a problem collectively. To grow future supercomputing performance and accommodate the resulting bursts of data flowing through the system, IBM researchers have been investigating the use of light for data transmission -- on the chips themselves, between processors, and throughout complex communication networks. Optical data transmission is very promising by virtue of its high capacity, its ability to transfer data with minimal loss over large distances, and its low power consumption.
Motivated by these prospects, a team of computer scientists at the IBM Zurich Research Laboratory and optical engineers at the US-based company Corning Inc. set out to design and develop a high-performance optical communication network by focusing on the most critical components -- the switches. The function of a switch is to control data flows and prevent congestion within the complex network of data highways.
As a result of the joint four-year project entitled OSMOSIS (Optical Shared MemOry Supercomputer Interconnect System), IBM and Corning researchers have now demonstrated the most powerful optical packet switch to date. It combines 64 optical data links, each running at 40 Gigabits per second, for an aggregate throughput of up to 2.5 Terabits per second -- enough, for comparison, to transmit 20 HD DVD movies in a single second.
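The aggregate-bandwidth figure is easy to verify with back-of-envelope arithmetic. In the sketch below, the roughly 15 GB single-layer HD DVD capacity is an assumption for illustration, not a figure from the announcement:

```python
# Back-of-envelope check of the 2.5 Tb/s aggregate bandwidth figure.
num_links = 64
link_rate_gbps = 40                    # Gigabits per second per optical link
aggregate_gbps = num_links * link_rate_gbps
print(aggregate_gbps)                  # 2560 Gb/s, i.e. ~2.5 Tb/s

# A single-layer HD DVD holds roughly 15 GB = 120 Gb (assumed), so the
# switch moves on the order of 20 such movies every second.
movies_per_second = aggregate_gbps / 120
print(round(movies_per_second, 1))
```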
"We will need such powerful optical interconnect systems in the future if we want to scale supercomputing capabilities and efficiency well beyond the petaflop range," explains Ronald Luijten, OSMOSIS project leader at IBM's Zurich Research Lab. "Such systems could, for example, accelerate discoveries in the fields of biomedicine and biology, and may even empower computers to design such complex, large-scale systems as new drugs."
One of the main challenges in developing the optical packet switch is the lack of optical memory: it is not yet known how to store and retrieve optical data bits easily and cost-effectively. Luijten's team, which was responsible for the switch design, overcame this issue with a hybrid electro-optical approach, using electronics to buffer and schedule the data and optics -- leading-edge Corning semiconductor optical amplifiers -- to transmit and switch it. The team developed a state-of-the-art electronic controller that can compute an optimal switch configuration during each packet slot of 51.2 nanoseconds, allowing the switch to operate practically bufferless while maximizing throughput and reliability.
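The 51.2-nanosecond slot translates into a concrete per-link payload at the quoted 40 Gb/s line rate; the arithmetic below is derived from those two figures, not stated in the release:

```python
# What one 51.2 ns packet slot carries on a single link, assuming the
# 40 Gb/s line rate quoted above (derived figures).
slot_ns = 51.2
link_rate_gbps = 40            # 40 Gb/s is 40 bits per nanosecond
bits_per_slot = slot_ns * link_rate_gbps
print(bits_per_slot)           # bits per slot per link
print(bits_per_slot / 8)       # bytes per slot per link
```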
"The function of the controller, which is the intelligence of the switch, is to perform scheduling and resolve contention," explains Luijten. The controller board -- one of the most complex designs ever developed -- was awarded the 2007 Mentor Graphics Award for outstanding circuit board design.
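Contention resolution of this kind is typically formulated as computing a conflict-free matching between input and output ports once per packet slot. The sketch below shows a generic round-robin request-grant matcher; it illustrates the idea only and is not the actual OSMOSIS controller algorithm (the function name and the 4x4 example are invented for illustration):

```python
def schedule_slot(requests, grant_ptr):
    """Compute a conflict-free (output -> input) matching for one slot.

    requests[i] is the set of output ports that input i wants to reach;
    grant_ptr[o] is output o's round-robin pointer, advanced on a grant
    so that no input is starved across successive slots.
    """
    n_inputs = len(requests)
    n_outputs = len(grant_ptr)
    # Collect the requesting inputs for each output port.
    per_output = {o: [i for i in range(n_inputs) if o in requests[i]]
                  for o in range(n_outputs)}
    matched_inputs, matching = set(), {}
    for o in range(n_outputs):
        # Grant the first still-unmatched requester at or after the
        # output's round-robin pointer.
        candidates = sorted(per_output[o],
                            key=lambda i: (i - grant_ptr[o]) % n_inputs)
        for i in candidates:
            if i not in matched_inputs:
                matching[o] = i
                matched_inputs.add(i)
                grant_ptr[o] = (i + 1) % n_inputs
                break
    return matching

# Example: a 4x4 switch where three inputs contend for output 0.
reqs = [{0, 2}, {0}, {1}, {0, 3}]
ptrs = [0, 0, 0, 0]
print(schedule_slot(reqs, ptrs))
```

Each output grants at most one input and each input is matched at most once, so the resulting configuration can be applied to the crossbar without collisions; losing requesters simply retry in the next slot.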
About the IBM Zurich Research Laboratory
The IBM Zurich Research Laboratory (ZRL) is the European branch of IBM Research, a worldwide network of some 3,500 employees in eight laboratories around the globe and the largest industrial IT research organization in the world. ZRL's spectrum of research activities ranges from basic science and fundamental research in physics and mathematics, to the development of computer systems and software, to the design of novel business models and services.
Source: IBM Zurich Research Laboratory