July 25, 2011
As we progress further into the era of web-based everything, there can be no denying that the networks supporting this explosion of online interaction are due for an innovative revamp.
This is where the Internet2 initiative enters the picture: a gathering of minds dedicated to advancing networking applications and technologies. The consortium is working with the Energy Sciences Network (ESNet), which provides data connections for universities and institutions, to develop experiments on top of dormant networking resources collectively known as “dark fiber.”
While it could be several years before the fruits of their networking research extend to the masses, the teams are working on two prototype networks, including one that promises data transfer rates in the 100 gigabit per second range. To put that in context, Google is considered one of the companies on the cutting edge of network speed for its announcement that it would build a 1 gigabit per second network for a chosen community.
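To make that gap concrete, here is a rough back-of-the-envelope sketch in Python (the 1 terabyte dataset size is an illustrative assumption, not a figure from the article) of how long an ideal transfer would take at each rate, ignoring protocol overhead.

```python
# Illustrative arithmetic only: ideal transfer times at 100 Gbps vs. 1 Gbps.
# The 1 TB dataset size is an assumption for the sake of the comparison.

def transfer_time_seconds(data_bytes: float, link_gbps: float) -> float:
    """Ideal transfer time: data size in bits divided by raw link rate in bits/sec."""
    return (data_bytes * 8) / (link_gbps * 1e9)

dataset = 1e12  # 1 terabyte

for rate in (100, 1):
    t = transfer_time_seconds(dataset, rate)
    print(f"{rate:>3} Gbps: {t:,.0f} s (~{t / 3600:.2f} h)")

# Expected output (ideal, no protocol overhead):
# 100 Gbps: 80 s (~0.02 h)
#   1 Gbps: 8,000 s (~2.22 h)
```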
As Robert Vietzke, Internet2’s director of network services, told Technology Review, “When you want to do something disruptive, when you want to try something really radical, you can’t do that on a network that people are trying to actually use. At the same time, it’s useful to test these ideas on real network infrastructure.”
Vietzke says that in the past this kind of research required network researchers to buy spools of fiber, install them in a lab setting, and try with all their might to recreate the conditions a national network would face. Dark fiber eliminates those purchases and the difficulty of simulating mega-networks by giving researchers a large-scale, real-world network to experiment on.
Dark fiber refers to a rather extensive network of fiber that is lying unused, much of which was purchased for next to nothing after the dot-com bubble burst. Internet2 and ESNet have leased this fiber for the next 20 years to build their 100 gigabit per second network, a separate network left dark and open to whatever equipment and protocols researchers want to bring to it.
Full story at Technology Review