December 01, 2006
A team of experts from the University of Illinois at Chicago's National Center for Data Mining (NCDM), Northwestern University and Johns Hopkins University won the 7th annual Bandwidth Challenge held November 16th in Tampa, FL at SC06, the international conference for high performance computing, networking and storage.
They transported the 1.3 TB Sloan Digital Sky Survey (SDSS) data set from the University of Illinois at Chicago to the SC06 show floor in Tampa at a sustained transfer rate of 8 Gb/s over a 10 Gb/s link, with a peak rate of 9.18 Gb/s.
This was a major new milestone, demonstrating that it is now practical for working scientists to transfer large data sets disk to disk over long distances on 10 Gb/s networks.
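The figures above imply transfer times on the order of minutes rather than days. A quick back-of-the-envelope calculation (assuming decimal units, 1 TB = 10^12 bytes) makes this concrete:

```python
# Transfer-time estimates for the 1.3 TB SDSS data set, using the
# rates reported in the article (8 Gb/s sustained, 9.18 Gb/s peak).

DATA_BYTES = 1.3e12      # 1.3 TB data set
SUSTAINED_BPS = 8e9      # 8 Gb/s sustained rate
PEAK_BPS = 9.18e9        # 9.18 Gb/s peak rate

def transfer_minutes(size_bytes: float, rate_bits_per_s: float) -> float:
    """Minutes needed to move size_bytes at rate_bits_per_s."""
    return size_bytes * 8 / rate_bits_per_s / 60

print(f"at sustained 8 Gb/s:  {transfer_minutes(DATA_BYTES, SUSTAINED_BPS):.1f} min")
print(f"at peak 9.18 Gb/s:    {transfer_minutes(DATA_BYTES, PEAK_BPS):.1f} min")
```

At the sustained rate the full 1.3 TB moves in roughly 22 minutes; compare that with overnight shipping of physical disks.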
Until recently, the easiest way to transport data sets of this size was to ship physical disks via Federal Express, but today's high-speed networks and emerging network protocols can now move these massive data sets efficiently.
"Not too long ago it took days to move around such terabyte datasets. Moving data at such speeds opens up whole new ways of approaching scientific problems. Our collaboration has been a wonderful example of how computer scientists, network experts and astronomers work together to solve real-life problems that will impact our whole discipline," says Alexander Szalay, Alumni Centennial Professor of Physics and Astronomy at the Johns Hopkins University.
The data set was the BESTDR5 catalog from the Sloan Digital Sky Survey; when compressed, it consisted of 60 files of about 23 GB each, totaling 1.3 TB.
The technology that made this possible was UDT, an open source high performance network transport protocol that the NCDM developed several years ago. Since then it has been downloaded more than 8,000 times and is deployed in a variety of research and business settings.
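UDT's advantage over stock TCP on links like this comes down to the bandwidth-delay product: to keep a fast, long-distance pipe full, a sender must keep one full BDP of data in flight, far more than the default TCP windows of the era allowed. A small illustration (the 30 ms Chicago-to-Tampa round-trip time below is an assumed figure for illustration, not a measurement from the article):

```python
# Bandwidth-delay product: how much unacknowledged data must be "in
# flight" to saturate a link. The 30 ms RTT is an illustrative
# assumption for a Chicago-to-Tampa path, not a reported measurement.

def bdp_bytes(rate_bits_per_s: float, rtt_s: float) -> float:
    """Bytes in flight required to keep the link full."""
    return rate_bits_per_s * rtt_s / 8

window = bdp_bytes(10e9, 0.030)   # 10 Gb/s link, 30 ms round trip
print(f"required window: {window / 1e6:.1f} MB")
```

That works out to roughly 37.5 MB of in-flight data, orders of magnitude beyond a classic 64 KB TCP window, which is why UDP-based protocols such as UDT, which handle reliability and rate control at the application level, can sustain rates that unmodified TCP cannot.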
The technology that made this easy was an open source peer-to-peer storage system called SECTOR that NCDM recently developed. SECTOR is built using UDT and is designed to distribute large e-science data sets such as the Sloan Digital Sky Survey.
"Winning this year's Bandwidth Challenge graphically demonstrates that it is now practical for the working scientist to access terabyte size data sets from anywhere in the world. All it takes are modern high performance networks and new network protocols, such as UDT," said Robert Grossman, Director of the National Center for Data Mining at the University of Illinois at Chicago and Managing Partner of Open Data Group.
The network that made this feasible was PacketNet, a 10 Gb/s network provided by National LambdaRail (NLR).
"By using the National LambdaRail and its member regional optical networks, scientists can access terabyte size data sets in minutes instead of days. This is a great example of what you can do with member-owned infrastructure. We are just beginning to see the implications of this," said Tom West, NLR's president and CEO.
In the past, UDT and other technologies could move data at high speeds but faced challenges when used to move data from disk to disk over long distances (additional protocols and services are required when moving data disk-to-disk versus memory-to-memory). By using SECTOR, it is now possible to transport large data sets from disk-to-disk just as easily as transporting large data sets from memory-to-memory.
"This demonstration showcases new techniques for data analysis by closely integrating application processes with leading edge advanced communication technologies. This innovation is significant because it results in both high performance data transport and in high quality analytic results," says Joe Mambretti, Director of the International Center for Advanced Internet Research at Northwestern University.
The winning team consisted of Yunhong Gu, Robert Grossman, Michal Sabala, David Hanley and Shirley Connelly from the National Center for Data Mining at the University of Illinois at Chicago; Alex Szalay, Ani Thakar, Jan vandenBerg, and Alainna Wonders from Johns Hopkins University; and Joe Mambretti from Northwestern University.
For the Bandwidth Challenge, Force10 loaned the NCDM an E600 switch to use on the show floor; Extreme Networks provided an 8810 switch to use in Chicago; and DataDirect Networks provided an S2A9550 RAID controller and 80 disks to use on the show floor in Tampa.
The technology was tested using the Teraflow Network, which is managed by the Consortium for Data Analysis Research (CDAR).
For more details, see the web site: sdss.ncdm.uic.edu.
Source: National Center for Data Mining, University of Illinois at Chicago