December 21, 2007
ZURICH, Switzerland, Dec. 17 -- The performance of tomorrow's supercomputers will be dictated by their ability to exchange large volumes of data instantly among the hundreds of thousands of processors from which they are built. Using optical networks that carry data throughout the system as light, researchers at IBM and Corning Inc., under a project sponsored by the US Department of Energy/NNSA, have demonstrated the world's most advanced and powerful optical packet switch. The novel switch can transmit 2.5 Terabits of data -- equivalent to 20 high-definition movies -- in a single second.
Today's supercomputers, such as IBM's Blue Gene system, are based on tens of thousands of relatively simple and power-efficient processors that work in parallel to solve a problem collectively. To grow future supercomputing performance and accommodate the resulting surges in data flow within the system, IBM researchers have been investigating the use of light for data transmission -- on the chips themselves, between processors, and throughout complex communication networks. Optical data transmission is very promising by virtue of its high capacity, its ability to transfer data over long distances with minimal loss, and its low power consumption.
Motivated by these prospects, a team of computer scientists at the IBM Zurich Research Laboratory and optical engineers at the US-based company Corning Inc. set out to design and develop a high-performance optical communication network by focusing on the most critical components -- the switches. The function of a switch is to control data flows and prevent congestion within the complex network of data highways.
As a result of the joint four-year project entitled OSMOSIS (Optical Shared MemOry Supercomputer Interconnect System), IBM and Corning researchers have now demonstrated the most powerful optical packet switch to date. It combines 64 optical data links, each running at 40 Gigabits per second, for a total throughput of up to 2.5 Terabits per second -- enough to transmit 20 HD DVD movies in a single second.
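The aggregate figure follows directly from the link count and per-link rate quoted above; the snippet below is purely an editorial sanity check of that arithmetic, not part of the announcement:

```python
# Back-of-the-envelope check of the OSMOSIS aggregate bandwidth
# (link count and per-link rate as quoted in the article).
links = 64                       # optical data links
rate_gbps = 40                   # per-link rate, Gigabits per second
aggregate_tbps = links * rate_gbps / 1000
print(f"{aggregate_tbps} Tb/s")  # 2.56 Tb/s, rounded to 2.5 Tb/s in the article
```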
"We will need such powerful optical interconnect systems in the future if we want to scale supercomputing capabilities and efficiency well beyond the petaflop range," explains Ronald Luijten, OSMOSIS project leader at IBM's Zurich Research Lab. "Such systems could, for example, accelerate discoveries in the fields of biomedicine and biology, and may even empower computers to design such complex, large-scale systems as new drugs."
One of the main challenges in developing an optical packet switch is the lack of optical memory: it is not yet known how to store and retrieve optical data bits easily and cost-effectively. Luijten's team, which was responsible for the switch design, overcame this issue with a hybrid electro-optical approach: electronics buffer and schedule the data, while optics -- leading-edge Corning semiconductor optical amplifiers -- handle transmission and switching. The team developed a state-of-the-art electronic controller that computes an optimal switch configuration during each packet slot of 51.2 nanoseconds, allowing the switch to operate practically bufferless while maximizing throughput and reliability.
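The article does not disclose the controller's actual scheduling algorithm, which is implemented in hardware. As a rough illustration of the problem it must solve in every packet slot -- granting each input at most one output and each output at most one input -- here is a minimal, hypothetical greedy-matching sketch; `schedule_slot` and its request format are inventions for illustration only:

```python
# Hypothetical sketch of per-slot crossbar scheduling: given which inputs
# hold packets for which outputs, pick a conflict-free matching so that
# each input sends to at most one output and vice versa. The real OSMOSIS
# controller does this in custom hardware within each 51.2 ns packet slot;
# at 40 Gb/s, one slot carries 40e9 b/s * 51.2e-9 s = 2048 bits = 256 bytes.

def schedule_slot(requests):
    """requests: dict mapping input port -> set of requested output ports.
    Returns a dict mapping input port -> granted output port."""
    granted_outputs = set()
    matching = {}
    for inp, outs in requests.items():
        for out in sorted(outs):
            if out not in granted_outputs:
                matching[inp] = out
                granted_outputs.add(out)
                break
    return matching

# Example: three inputs contend for outputs on a small switch.
print(schedule_slot({0: {1, 2}, 1: {1}, 2: {2, 3}}))
# -> {0: 1, 2: 2}; input 1 loses contention and must wait for the next slot
```

A production controller would iterate or pipeline such matchings to approach an optimal configuration within the 51.2 ns budget; the greedy pass above only shows the shape of the contention-resolution problem.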
"The function of the controller, which is the intelligence of the switch, is to perform scheduling and resolve contention," explains Luijten. The controller board -- one of the most complex designs ever developed -- was awarded the 2007 Mentor Graphics Award for outstanding circuit board design.
About the IBM Zurich Research Laboratory
The IBM Zurich Research Laboratory (ZRL) is the European branch of IBM Research, a worldwide network of some 3,500 employees in eight laboratories that constitutes the largest industrial IT research organization in the world. ZRL's spectrum of research activities ranges from basic science and fundamental research in physics and mathematics, to the development of computer systems and software, to the design of novel business models and services.
Source: IBM Zurich Research Laboratory