October 27, 2006
While many people attending conferences say they go for the networking opportunities, when the gathering is the world's largest conference on high performance computing and networking, connectivity is taken to an entirely different level.
When SC06, the premier international conference of high performance computing, networking, storage and analysis, convenes Nov. 11-17 in the Tampa Convention Center, the center will be one of the best-connected sites on the planet. And, as a service for future conventions, much of the state-of-the-art networking infrastructure installed in the center will remain in place.
Every year, a team of volunteers works for more than a year to design, build and manage the SC conference network known as SCinet. For SC06, the SCinet team will be bringing in ten 10-gigabit-per-second (Gbps) network connections to the convention center. The combined network capability will be about 20,000 times that of the fastest residential Internet service provided by cable TV and telephone companies.
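As a quick sanity check on that comparison, the arithmetic can be sketched as follows; the roughly 5 Mbps residential speed used here is an assumption about fast cable or DSL service in 2006, not a figure supplied by SCinet.

```python
# Rough arithmetic behind the "about 20,000 times" comparison. The ~5 Mbps
# residential speed is an assumed figure for fast cable/DSL service circa 2006,
# not a number from SCinet.
WAN_CIRCUITS = 10          # ten wide area circuits into the convention center
CIRCUIT_GBPS = 10          # each circuit carries 10 Gbps
RESIDENTIAL_MBPS = 5       # assumed residential cable/DSL downstream rate

total_gbps = WAN_CIRCUITS * CIRCUIT_GBPS       # 100 Gbps aggregate
ratio = total_gbps * 1000 / RESIDENTIAL_MBPS   # convert Gbps to Mbps, then compare

print(f"Aggregate SCinet WAN capacity: {total_gbps} Gbps")
print(f"Roughly {ratio:,.0f} times a {RESIDENTIAL_MBPS} Mbps residential connection")
```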
"This is a true team effort, from the 140 volunteers from around the country to the dozens of companies loaning us the necessary equipment to build the network," said Dennis Duke, a professor of physics at Florida State University and SCinet chair for the SC06 conference. "We started working on the network in October 2005 and have been working at it steadily since then."
The first big challenge was getting the network connections to downtown Tampa, as the major network links operated by Level3 and Qwest ended about 12 miles from the convention center. Using fiber optic cable provided by Verizon, the SCinet team bridged the network from downtown Tampa to the convention center, where they started on the next challenge.
The Tampa Convention Center's internal network could not support the requirements of SC06, so the SCinet team installed about 64,000 feet -- more than 12 miles -- of fiber optics and copper wire. In addition to providing two network drops for every meeting room, SCinet installed a high bandwidth infrastructure serving all parts of the exhibit areas, where more than 225 industry and research exhibitors will showcase their latest systems, services and scientific achievements. SCinet is also providing wireless connectivity throughout the convention center.
Then they had one more bridge to cross. A number of the conference activities will be held in the Marriott Waterside Hotel, located 150 yards from the convention center. To provide the same network connectivity available in the convention center, SCinet built a 2.6 Gbps wireless bridge to the hotel using GigaBeam wireless equipment.
"While we rely on a lot of people and companies, I'm really proud of the role Florida LambdaRail has played in getting the National LambdaRail bandwidth from Atlanta to Tampa, which is absolutely essential for us," Duke said.
The Florida LambdaRail LLC (FLR) is leading the SCinet wide area network team in its delivery of over 100 Gbps of wide area network connectivity to the Tampa Convention Center. Ten 10 Gbps circuits will connect attendees and exhibitors via SCinet to key network connecting points in Chicago (Abilene), New York City (Abilene), Washington, D.C. (ESnet), Houston (National LambdaRail-PacketNet), Atlanta (NLR-PacketNet), Baton Rouge (NLR FrameNet), Jacksonville (NLR FrameNet), Miami (AMPath) and Chicago (UltraLight), extending the network reach both nationally and internationally.
FLR wave services are used to transport NLR, Atlantic Wave, Florida's Research and Education Network (FLRNet) and UltraLight to carrier facilities in Tampa. Abilene and ESnet network services are carried to Tampa over Qwest facilities. These 10 lambdas are then transported to the Tampa Convention Center via DWDM systems from Ciena, Cisco and Nortel, managed by the SC06 WAN team. In addition, Qwest and FLR are providing commodity Internet services for all SC06 participants.
The effort drew on the talent of engineers from Level3, Qwest, FLR, University of Florida, University of West Florida, Florida State University, Florida International University, University of South Florida, University of Wisconsin, NLR, CENIC, Abilene, ESnet, Atlantic Wave, UltraLight, Verizon, Cisco, Spirent, Nortel and Ciena.
Once the network is fully operational in November, SC06 attendees will push it to its limits, testing new technologies, flooding it with data and then measuring every aspect of the network's performance. Here are two of the conference network highlights:
At every SC conference since SC2000 in Dallas, teams of scientists and engineers have competed in the Bandwidth Challenge to see who could make the most of the huge bandwidth provided by SCinet. And while no group has achieved the unstated goal of flooding the network to the breaking point, each year has seen creative applications that move record amounts of data across the network.
This year's Bandwidth Challenge shifts its focus from "bandwidth heroes" to "Bridging the Hero Gap" -- that is, bridging the gap between what can be achieved by networking heroes and what can be achieved by the average researcher with access to high speed networks. The thinking behind this approach is that while 10 Gbps network links are becoming ever more prevalent, achieving data rates close to 10 Gbps, or even 1 Gbps, across those high-bandwidth networks is still unattainable for most users.
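One way to see why the gap persists is the bandwidth-delay product: a single TCP stream can have at most one window of data in flight per round trip, so untuned default settings cap throughput far below the link rate. The sketch below uses assumed round-trip time and window sizes, not measurements from SC06, to show the effect.

```python
# Illustrative sketch of the bandwidth-delay product, one reason typical users
# cannot fill a 10 Gbps path: a single TCP stream carries at most one window of
# unacknowledged data per round trip. The RTT and window sizes below are
# assumptions, not measurements from SC06.
def max_tcp_throughput_gbps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on single-stream TCP throughput, in Gbps."""
    return window_bytes * 8 / rtt_seconds / 1e9

RTT = 0.070  # assume a ~70 ms cross-country round-trip time
for window in (64 * 1024, 4 * 1024**2, 90 * 1024**2):  # default, tuned, "hero" windows
    gbps = max_tcp_throughput_gbps(window, RTT)
    print(f"{window / 1024**2:6.2f} MB window -> at most {gbps:6.3f} Gbps")
```

With these assumed numbers, a default 64 KB window yields only a few megabits per second, and even a 4 MB tuned window stays below half a gigabit; sustaining close to 10 Gbps requires windows on the order of 90 MB, which is why carefully tuned "hero" configurations have dominated past challenges.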
The objective is for the efforts of the nine participating teams not only to benefit their home institutions, but also to serve as a model for other institutions to follow. Read more at: http://sc06.supercomputing.org/pdf/SC06-BWC-CFP.pdf.
While SCinet's capabilities may be at the leading edge compared to many networks, SCinet's Xnet (eXtreme networks) pushes the envelope even further, providing a venue for showcasing emerging, often pre-commercial or pre-competitive developmental networking technologies, protocols and experimental networking applications at SC06.