March 12, 2013
WASHINGTON, D.C., March 12 — Next week, leaders from telecom, datacom and computing will convene for the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC) — the premier optical communications event where experts from industry and academia share their results, experiences, and insights on the future of hot topics such as cloud and datacenter networking, software-defined networks, and photonic integration. More than 12,000 attendees and an exhibit of 550 companies are expected.
The conference features a comprehensive technical program with more than 800 talks covering the latest research related to all aspects of optical communications. Some of the 2013 highlights come from leading companies in the industry, including IBM, AT&T and AFL. Selected research presentations taking place at the conference next week are outlined below:
Ultra-High-Speed Optical Communications Link Sets New Power Efficiency Record
Ultrafast supercomputers that operate at speeds 100 times faster than current systems are now one step closer to reality. A team of IBM researchers working on a U.S. Defense Advanced Research Projects Agency (DARPA)-funded program has found a way to transmit massive amounts of data with unprecedentedly low power consumption.
The team will describe their prototype optical link, which shatters the previous power-efficiency record, at OFC/NFOEC 2013 next week.
Scientists predict that the supercomputers of the future — so-called “exascale computers” — will enable them to model the global climate, run molecular-level simulations of entire cells, design nanostructures, and more. “We envision machines reaching the exascale mark around 2020, but a great deal of research must be done to make this possible,” says Jonathan E. Proesel, a research staff member at the IBM T.J. Watson Research Center in Yorktown Heights, N.Y. To reach that mark, researchers must develop a way to quickly move massive amounts of data within the supercomputer while keeping power consumption in check.
By combining innovative circuits in IBM’s 32-nanometer silicon-on-insulator complementary metal-oxide-semiconductor (SOI CMOS) technology with advanced vertical cavity surface emitting lasers (VCSELs) and photodetectors fabricated by Sumitomo Electric Device Innovations USA (formerly Emcore), Proesel and his colleagues created a power-efficient optical communication link operating at 25 gigabits per second using just 24 milliwatts of total wall-plug power, or 1 pJ/bit. “Compared to our previous work, we have increased the speed by 66 percent while cutting the power in half,” Proesel says. “We’re continuing the push for lower power and higher speed in optical communications. There will always be demand to move more data with less energy, and that’s what we’re working toward.”
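As a quick back-of-the-envelope check of those figures (an illustrative sketch; the variable names and the implied previous-generation numbers are inferred from the quotes above, not supplied by IBM):

```python
# Back-of-the-envelope check of the reported link efficiency (figures from the article).
data_rate_bps = 25e9          # 25 Gb/s link speed
wall_plug_power_w = 24e-3     # 24 mW total wall-plug power

energy_per_bit_j = wall_plug_power_w / data_rate_bps
print(f"Energy per bit: {energy_per_bit_j * 1e12:.2f} pJ/bit")   # ~0.96 pJ/bit, i.e. about 1 pJ/bit

# The quoted 66 percent speed increase at half the power, relative to IBM's earlier link,
# implies roughly 15 Gb/s at ~48 mW (about 3.2 pJ/bit) for the previous design.
implied_prev_rate_bps = data_rate_bps / 1.66
implied_prev_power_w = wall_plug_power_w * 2
print(f"Implied previous link: ~{implied_prev_rate_bps / 1e9:.0f} Gb/s at "
      f"~{implied_prev_power_w * 1e3:.0f} mW "
      f"(~{implied_prev_power_w / implied_prev_rate_bps * 1e12:.1f} pJ/bit)")
```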
Proesel’s presentation at OFC/NFOEC, titled “35-Gb/s VCSEL-Based Optical Link using 32-nm SOI CMOS Circuits,” will take place Mon., March 18 at 2:00 p.m. in the Anaheim Convention Center.
New Distance Record for 400 Gb/s Data Transmission
As network carriers debate the next Ethernet standard — and whether transmission speeds of 400 gigabits per second or 1 terabit per second should be the norm — engineers are working on new measures to squeeze next-generation performance out of current-generation systems.
To that end, a team from AT&T has devised a new patent-pending technique for tuning the modulation spectral efficiency, which allows, for the first time, 400 Gb/s signals to be sent over ultra-long distances on today’s 100 gigahertz-grid optical networks. Spectral efficiency is the information rate that can be transmitted over a given bandwidth; it measures how efficiently the available frequency spectrum is utilized.
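For illustration, the spectral efficiency implied by those figures can be computed directly (a minimal sketch using the article's round numbers):

```python
# Spectral efficiency = information rate / occupied bandwidth
net_data_rate_bps = 400e9    # 400 Gb/s net per channel
channel_grid_hz = 100e9      # one 100 GHz grid slot

spectral_efficiency = net_data_rate_bps / channel_grid_hz
print(f"Net spectral efficiency: {spectral_efficiency:.1f} b/s/Hz")   # 4.0 b/s/Hz
```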
The researchers, led by optical transmission system expert Xiang Zhou of AT&T Labs-Research in Middletown, N.J., will describe their work next week at OFC/NFOEC 2013.
In the system, Nyquist-shaped 400 Gb/s signals with tunable spectral efficiency were generated using modulated subcarriers. Eight 100 GHz-spaced, 400 Gb/s wavelength-division-multiplexed signals were combined and then transmitted over a re-circulating transmission test platform consisting of 100-km fiber spans.
Using the new modulation technique and a new low-loss, large-effective-area fiber from OFS Labs, the team transmitted the signals over a record-breaking 12,000 kilometers (roughly 7,500 miles) — surpassing their own previous distance record (set on the 50 gigahertz grid) by more than 9,000 km.
“This result not only represents a reach increase by a factor of 2.5 for 100 GHz-spaced 400 G-class WDM systems, it also sets a new record for the product of spectral efficiency and distance,” says Zhou. Compared to modulation techniques currently used, he says, “our method has the unique capability to allow tuning of the modulation spectral efficiency to match the available channel bandwidth and maximize the transmission reach, while maintaining tolerance to fiber nonlinearities and laser phase noise, both of which are major factors limiting performance for high-speed optical systems.”
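To put that figure of merit in concrete terms, the following illustrative arithmetic combines the numbers quoted above; the results are back-of-the-envelope estimates, not figures taken from the published paper:

```python
spectral_efficiency_bshz = 400e9 / 100e9   # 4 b/s/Hz net (400 Gb/s on a 100 GHz grid)
reach_km = 12_000                          # record transmission distance

print(f"SE x distance: {spectral_efficiency_bshz * reach_km:,.0f} (b/s/Hz)*km")   # 48,000

# The quoted 2.5x reach increase implies the previous 100 GHz-grid record was roughly 4,800 km.
print(f"Implied previous reach: ~{reach_km / 2.5:,.0f} km")
```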
Zhou’s presentation at OFC/NFOEC, entitled “12,000km Transmission of 100GHz Spaced, 8x495-Gb/s PDM Time-Domain Hybrid QPSK-8QAM Signals,” will take place Tue., March 19 at 3:00 p.m. in the Anaheim Convention Center.
New Automated Process Simplifies Alignment and Splicing of Multicore Optical Fibers
New multicore optical fibers have many times the signal-carrying capacity of traditional single-core fibers, but their use in telecommunications has been severely restricted because of the challenge in splicing them together — picture trying to match up and connect two separate boxes of spaghetti so that all of the noodles in each box are perfectly aligned. Now, a new splicing technique offers an automated way to do just that, with minimal losses in signal quality across the spliced sections. The method will be described next week at OFC/NFOEC 2013.
In the telecommunications industry, engineers maximize signal-carrying capacity using a process called multiplexing, which allows multiple signals or data streams to be combined within a single fiber cable. One digital phone line, for example, uses 64 kilobits per second of bandwidth, but with a technique called time multiplexing, more than 1.5 million phone conversations can take place at the same time, carried by one fiber core. With wavelength multiplexing, that one fiber core can send up to 200 different wavelengths of light simultaneously, increasing the capacity to 10 terabits per second, serving about 200 million phone lines. Those multiplexed cores, in turn, can be combined into a so-called multicore fiber (MCF) consisting of up to 19 cores — and offering up to 19 times the signal-carrying capacity.
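Those capacity figures scale roughly as in the sketch below, which uses the paragraph's round numbers (the per-core count of 64 kb/s lines works out to about 156 million, which the article rounds to about 200 million):

```python
# Rough capacity scaling for a wavelength-multiplexed, 19-core fiber (figures from the paragraph above)
voice_channel_bps = 64e3         # one digital phone line: 64 kb/s
per_core_capacity_bps = 10e12    # ~10 Tb/s per fully multiplexed core
num_cores = 19                   # maximum core count cited for an MCF

phone_lines_per_core = per_core_capacity_bps / voice_channel_bps
total_capacity_bps = per_core_capacity_bps * num_cores

print(f"64 kb/s phone lines per core: ~{phone_lines_per_core / 1e6:.0f} million")  # ~156 million
print(f"19-core fiber capacity: ~{total_capacity_bps / 1e12:.0f} Tb/s")            # ~190 Tb/s
```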
The challenge, however, is splicing those multicores together.
Researchers who work with MCFs in the lab usually have their own preferred manual processes for aligning and splicing fibers, explains Wenxin Zheng, manager of splice engineering at AFL in Duncan, S.C., who developed the new technique. “Although the manual way may be good for a skilled operator in a lab environment for research purposes, automation is the only path that can push MCF to factories and production lines.”
In Zheng’s process, which uses a Fujikura FSM-100P+ fusion splicer, the fibers to be spliced are stripped and loaded into the splicer, then rotated and imaged with two video cameras so that their cores can be roughly aligned using a pattern-matching algorithm. Next, using a power-feedback method and image processing, a pair of corresponding cores in each fiber are finely aligned, as is the cladding around the cores. Finally, the cores are heat-spliced.
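A rough outline of that automated sequence is sketched below as illustrative Python pseudocode; the splicer object and its methods are hypothetical placeholders and do not represent the Fujikura FSM-100P+ control interface or AFL's implementation:

```python
def splice_multicore_fiber(splicer, left_fiber, right_fiber):
    """Illustrative sketch of the automated MCF alignment and splicing sequence
    described above. All splicer methods here are hypothetical placeholders."""
    # 1. Stripped fiber ends are loaded into the fusion splicer.
    splicer.load(left_fiber, right_fiber)

    # 2. Each fiber is rotated and imaged with two video cameras; a
    #    pattern-matching step gives a coarse rotational alignment of the cores.
    left_view, right_view = splicer.capture_end_views()
    splicer.rotate(right_fiber, splicer.match_core_pattern(left_view, right_view))

    # 3. One pair of corresponding side cores (and the cladding) is fine-aligned
    #    using image processing plus power feedback: small trial rotations are
    #    applied until the light coupled through the chosen core peaks.
    best_angle, best_power = 0.0, float("-inf")
    for trial_angle in [i * 0.1 for i in range(-20, 21)]:   # +/- 2 degrees in 0.1-degree steps
        splicer.rotate(right_fiber, trial_angle)
        power = splicer.measure_coupled_power(core_index=1)
        if power > best_power:
            best_angle, best_power = trial_angle, power
    splicer.rotate(right_fiber, best_angle)
    splicer.align_cladding()

    # 4. The fibers are heat-spliced; with a symmetric core layout the remaining
    #    cores line up automatically once one side core and the cladding match.
    return splicer.fuse()
```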
“To align the multiple cores simultaneously is a big challenge,” Zheng says. “If two fibers to be spliced have random core locations, there is no way to align the entire core.” However, the component cores of MCFs can be aligned if they are created using the same design standard, and if the cores are distributed symmetrically in the MCF — such as in a seven-core MCF with one central core surrounded by six cores oriented like the spokes of a wagon wheel. In that case, Zheng notes, “we can fine-align one side-core in an MCF and its cladding at the same time. Based on the geometric specifications of the fiber, the rest of the cores will be automatically aligned.”
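For the symmetric seven-core layout Zheng describes, the geometric argument can be made concrete: once the rotation angle that aligns one side core is found, every other core position follows from the 60-degree symmetry. A minimal, self-contained sketch (the core pitch and angle are illustrative values, not AFL's specifications):

```python
import math

CORE_PITCH_UM = 40.0    # illustrative center-to-side-core spacing, not a vendor specification
NUM_SIDE_CORES = 6      # one central core surrounded by six side cores

def side_core_positions(rotation_deg):
    """x, y positions (in um) of the six side cores for a given fiber rotation angle."""
    step = 360.0 / NUM_SIDE_CORES
    return [(CORE_PITCH_UM * math.cos(math.radians(rotation_deg + k * step)),
             CORE_PITCH_UM * math.sin(math.radians(rotation_deg + k * step)))
            for k in range(NUM_SIDE_CORES)]

# If fine alignment finds that one side core must sit at, say, 12 degrees to face its
# counterpart, the remaining five core positions are fixed by the fiber's geometry.
for x, y in side_core_positions(12.0):
    print(f"core at ({x:6.1f}, {y:6.1f}) um")
```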
Zheng’s presentation, “Automated Alignment and Splicing for Multicore Fibers,” will take place at 5:00 p.m. Mon., March 18 at the Anaheim Convention Center.
For more than 35 years, the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC) has been the premier destination for converging breakthrough research and innovation in telecommunications, optical networking, fiber optics and, recently, datacom and computing. Consistently ranked in the top 200 tradeshows in the United States, and named one of the Fastest Growing Trade Shows in 2012 by TSNN, OFC/NFOEC unites service providers, systems companies, enterprise customers, IT businesses, and component manufacturers, with researchers, engineers, and development teams from around the world. OFC/NFOEC includes dynamic business programming, an exposition of more than 550 companies, and cutting-edge peer-reviewed research that, combined, showcase the trends and pulse of the entire optical communications industry.