June 20, 2012
SUNNYVALE, Calif., June 19 -- Infinera announced today that the company has made its first shipments of the DTN-X platform to customers for deployment. Infinera continues to conduct multiple trials and has received purchase orders from new and existing customers. To date, Infinera has announced plans to deploy the DTN-X platform with two new customers: Cable & Wireless Worldwide, for its Europe Persia Express Gateway (EPEG), and DANTE, for its GÉANT European Research and Education Network.
The Infinera DTN-X platform delivers what is believed to be the world’s first 500 gigabit per second (Gb/s) long-haul FlexCoherent super-channels, enabling service providers to deploy massive optical transport capacity while lowering operational costs. The DTN-X platform features 5 Terabits per second (Tb/s) of Optical Transport Network (OTN) switching capacity. Integrated switching enables service providers to build highly efficient networks, with switching activated wherever it is needed to improve wavelength fill and reduce the number of wavelengths that must be deployed. The result is a network with a high Network Efficiency Quotient and a potentially lower total cost of ownership (TCO). The DTN-X also features an industry-leading GMPLS control plane that makes the platform easy to use and further simplifies operations.
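The wavelength-fill benefit of integrated OTN switching can be illustrated with a back-of-the-envelope sketch. The demand figures below are hypothetical examples, not Infinera data; the point is only that grooming lets partially filled super-channels be consolidated:

```python
# Hypothetical illustration: packing 10 Gb/s client demands into
# 500 Gb/s super-channels, with and without OTN grooming.

def superchannels_needed(total_gbps, channel_gbps=500):
    """Ceiling division: super-channels required to carry total_gbps."""
    return -(-total_gbps // channel_gbps)

# Example: three routes, each carrying 600 Gb/s of client demand.
demands = [600, 600, 600]

# Without grooming, each route gets its own super-channels, so the
# leftover 100 Gb/s of fill on each one is stranded.
ungroomed = sum(superchannels_needed(d) for d in demands)

# With OTN switching at an intermediate site, the aggregate demand
# can be consolidated onto shared super-channels instead.
groomed = superchannels_needed(sum(demands))

print(ungroomed, groomed)  # 6 vs. 4 super-channels in this toy example
```

In this toy case, grooming cuts the deployed wavelength count from six super-channels to four, which is the kind of fill improvement the release is describing.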
Infinera first began shipping the DTN platform in 2004, revolutionizing the marketplace as the only optical networking solution based on 100 Gb/s photonic integrated circuits (PICs). The DTN-X platform builds on this foundation of innovation and today features third-generation 500 Gb/s PICs, delivering to service providers a solution focused on simplicity, scalability, efficiency and reliability.
“While our competitors talk about their roadmaps for metro 400 Gb/s super-channels, Infinera is delivering the industry’s first 500 Gb/s long haul FlexCoherent super-channels,” said Dave Welch, co-founder, EVP and Chief Strategy Officer at Infinera. “With DTN-X, we are also bringing the largest capacity OTN switch to market and delivering the industry’s first deployable FlexCoherent capability. Infinera delivers on the capabilities that help service providers scale, simplify and make their networks more efficient to lower overall lifecycle TCO.”
DTN-X Key Facts
· First system in the world to deliver 500 Gb/s long haul FlexCoherent super-channels, upgradeable to 1 Tb/s super-channels in the future.
· Largest capacity OTN switch with current capacity of 5 Tb/s.
· Engineered to enable future upgrades to 10 Tb/s per chassis of OTN switching and 100 Tb/s in a multi-bay configuration.
· First system in the world to deliver deployable FlexCoherent technology, supporting software selectable modulation formats on a single card, reducing operating costs.
· Converges network layers, supporting best-of-breed DWDM transmission, OTN switching and, in the future, MPLS switching in a single platform.
· Intelligent GMPLS control plane software is designed to simplify operations and enable global service providers to rapidly deploy network capacity while lowering operational costs.
· The DTN-X is interoperable with the DTN platform and supports 10 Gigabit Ethernet (10 GbE), 40 GbE, 100 GbE, 10 Gb/s SONET/SDH/OTN, 40 Gb/s SONET/SDH/OTN, 100 Gb/s OTN, 8/10 Gb/s Fibre Channel, and multiple-bit-rate clear-channel interfaces.
· Infinera is introducing a full-rack XTC-10 and a half-rack XTC-4 chassis, both of which are now shipping to customers.
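The capacity figures quoted above are internally consistent, as a quick sanity check shows. The constants below come from the release; the arithmetic itself is ours, assuming full fill:

```python
# Back-of-the-envelope check of the DTN-X capacity figures.

CHANNEL_GBPS = 500          # one FlexCoherent super-channel
CHASSIS_TBPS_TODAY = 5      # current OTN switching capacity per chassis
CHASSIS_TBPS_FUTURE = 10    # engineered per-chassis upgrade path
MULTIBAY_TBPS = 100         # future multi-bay configuration

# Super-channels a single chassis can switch at full fill today:
channels_per_chassis = CHASSIS_TBPS_TODAY * 1000 // CHANNEL_GBPS

# Chassis implied by the 100 Tb/s multi-bay figure at 10 Tb/s each:
bays = MULTIBAY_TBPS // CHASSIS_TBPS_FUTURE

print(channels_per_chassis, bays)  # 10 super-channels per chassis, 10 bays
```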
This week at the WDM & Next Generation Optical Networking conference in Monaco, Infinera’s product and technology experts are conducting live, hands-on DTN-X demonstrations aboard the Infinera Express, located on the conference show floor.
The Infinera product portfolio also includes the DTN platform, powered by 100 Gb/s PICs, supporting both 10 Gb/s and 40 Gb/s channels and designed to scale up to 6.4 Tb/s of transmission capacity per fiber; the Infinera ATN, a scalable metro WDM transport platform; and Infinera Managed Services for global service and support.