November 12, 2012
SALT LAKE CITY, Nov. 12 – Vitesse Semiconductor Corporation and Avago Technologies announced availability of the industry's first joint CXP host reference design delivering 100G/120G connectivity for high-speed routers, enterprise data centers, and high-performance cloud computing applications. Based on Vitesse's VSC7227 12-channel signal conditioner and Avago's AFBR-83PDZ 100G CXP transceiver, the joint reference design lets customers leverage proven Vitesse-Avago interoperability and expedite time to market for 100G/120G connectivity solutions. It will be highlighted at SC12 at the Salt Palace Convention Center in Salt Lake City, Utah.
Sustained growth in cloud computing, mobile networking and video, remote storage, and other bandwidth-intensive services is driving demand for high-density 100G/120G connectivity. Recent surveys show that network bandwidth is one of the most critical issues facing data centers, driven by increases in virtualization, cloud computing, big data, and convergence. To meet these needs, carriers and equipment providers are upgrading existing systems with higher-density 10G ports, which Infonetics expects to grow 68% in 2012 alone. The reference design supports these higher-density 10G ports as well as the migration to true 100G links and beyond.
“The Vitesse reference design demonstrates the superior combined performance of Avago CXP transceivers and Vitesse signal conditioning ICs,” said Sharon Hall, product manager for parallel fiber optic products at Avago. “With this combined solution, system designers can design 100G/120G optically connected systems with confidence and improved signal integrity.”
“This reference design gives our customers a proven way to solve higher bandwidth demands in optical networks,” said Gary Paules, product marketing manager at Vitesse. “Through ecosystem developments such as this with Avago, Vitesse continues to extend its connectivity leadership, enabling our customers to rapidly bring to market advanced solutions for high-density 100G Ethernet and InfiniBand applications.”
Reference Design Details
Specific devices on the reference design include the Vitesse VSC7227 12-channel signal conditioner and the Avago AFBR-83PDZ 100G CXP transceiver. The reference design is available immediately; contact your local Vitesse sales office to learn more.
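For engineers bringing the reference design up in their own systems, a natural first step is polling the pluggable module's low-speed management interface for status and monitor data. Below is a minimal sketch of such a poll from Linux using the smbus2 Python library; the bus number, the 0x50 device address, the register offsets, and the temperature scaling are illustrative assumptions (the actual memory map is defined by the CXP MSA and the Avago datasheet), not details taken from this announcement.

    # Minimal sketch: reading a temperature monitor from a pluggable optical
    # module's I2C management interface on Linux. The bus number, device
    # address, register offsets, and 1/256 degC scaling below are assumptions
    # for illustration; consult the CXP MSA and module datasheet before use.
    from smbus2 import SMBus

    I2C_BUS = 1          # assumed host I2C bus wired to the module cage
    MODULE_ADDR = 0x50   # management address common to many pluggable optics (assumed)
    REG_TEMP_MSB = 22    # hypothetical offset of the temperature monitor MSB
    REG_TEMP_LSB = 23    # hypothetical offset of the temperature monitor LSB

    def read_module_temperature(bus):
        """Read a two-byte temperature monitor and scale to degrees C."""
        msb = bus.read_byte_data(MODULE_ADDR, REG_TEMP_MSB)
        lsb = bus.read_byte_data(MODULE_ADDR, REG_TEMP_LSB)
        raw = (msb << 8) | lsb
        if raw & 0x8000:      # sign-extend the 16-bit two's-complement value
            raw -= 0x10000
        return raw / 256.0    # many optics encode temperature in 1/256 degC steps

    if __name__ == "__main__":
        with SMBus(I2C_BUS) as bus:
            print("Module temperature: %.1f degC" % read_module_temperature(bus))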
Avago and Vitesse at Supercomputing (SC12)
The joint reference platform is available for viewing at SC12 in Avago's booth, #737. In addition to the live hardware demonstration, a complete interoperability report is now available to interested customers.
The International Conference for High Performance Computing, Networking, Storage and Analysis (SC12) takes place November 12-15, 2012 at the Salt Palace Convention Center in Salt Lake City, Utah. Learn more about the conference at sc12.supercomputing.org.
About Avago Technologies
Avago Technologies is a leading supplier of analog interface components for wireless, wireline, and industrial applications. The company provides an extensive range of analog, mixed-signal, and optoelectronics components and subsystems to approximately 40,000 end customers. Avago has a global employee presence and a heritage of technical innovation dating back 50 years to its Hewlett-Packard roots.
About Vitesse
Vitesse designs a diverse portfolio of high-performance semiconductor solutions for carrier and enterprise networks worldwide. Vitesse products enable the fastest-growing network infrastructure markets, including Mobile Access/IP Edge, Cloud Computing, and SMB/SME Enterprise Networking.