December 01, 2006
A team of experts from the University of Illinois at Chicago's National Center for Data Mining (NCDM), Northwestern University and Johns Hopkins University won the 7th annual Bandwidth Challenge held November 16th in Tampa, FL at SC06, the international conference for high performance computing, networking and storage.
They transferred the 1.3 TB Sloan Digital Sky Survey (SDSS) data set from the University of Illinois at Chicago to the SC06 show floor in Tampa at a sustained data transfer rate of 8 Gb/s over a 10 Gb/s link, with a peak rate of 9.18 Gb/s.
This milestone demonstrated that it is now practical for working scientists to transfer large data sets from disk to disk over long distances on a 10 Gb/s network; at a sustained 8 Gb/s, a 1.3 TB data set moves in a little over 20 minutes.
Until recently, the easiest way to transport data sets of this size was to ship disks via Federal Express, but today's high-speed networks and emerging network protocols can now move these massive data sets efficiently.
"Not too long ago it took days to move around such terabyte datasets. Moving data at such speeds opens up whole new ways of approaching scientific problems. Our collaboration has been a wonderful example of how computer scientists, network experts and astronomers work together to solve real-life problems that will impact our whole discipline," says Alexander Szalay, Alumni Centennial Professor of Physics and Astronomy at the Johns Hopkins University.
The data set was the BESTDR5 catalog from the Sloan Digital Sky Survey, which, when compressed, consisted of 60 files of about 23 GB each, totaling 1.3 TB.
The technology that made this possible was an open source, high-performance network transport protocol called UDT, which NCDM developed several years ago. Since then, UDT has been downloaded over 8,000 times and is being deployed in a variety of research and business settings.
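UDT exposes a socket-style C++ API that closely mirrors BSD sockets. The sketch below is a minimal illustration of a UDT sender, not the team's actual challenge code; the address, port, and buffer size are placeholders chosen for the example.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstring>
#include <iostream>
#include <vector>
#include <udt.h>

int main()
{
    UDT::startup();                                    // initialize the UDT library

    UDTSOCKET sock = UDT::socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in serv;                                  // receiver address (placeholder values)
    std::memset(&serv, 0, sizeof(serv));
    serv.sin_family = AF_INET;
    serv.sin_port = htons(9000);
    inet_pton(AF_INET, "203.0.113.10", &serv.sin_addr);

    if (UDT::ERROR == UDT::connect(sock, (sockaddr*)&serv, sizeof(serv))) {
        std::cerr << "connect: " << UDT::getlasterror().getErrorMessage() << std::endl;
        return 1;
    }

    std::vector<char> buf(1000000, 0);                 // 1 MB of application data
    int sent = 0;
    while (sent < (int)buf.size()) {                   // UDT::send may accept fewer bytes than requested
        int n = UDT::send(sock, &buf[0] + sent, (int)buf.size() - sent, 0);
        if (n == UDT::ERROR) break;
        sent += n;
    }

    UDT::close(sock);
    UDT::cleanup();
    return 0;
}

On the receiving side, the same calls are mirrored with UDT::bind, UDT::listen, UDT::accept and UDT::recv.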
The technology that made this easy was an open source peer-to-peer storage system called SECTOR, which NCDM recently developed. SECTOR is built on UDT and is designed to distribute large e-science data sets such as the Sloan Digital Sky Survey.
"Winning this year's Bandwidth Challenge graphically demonstrates that it is now practical for the working scientist to access terabyte size data sets from anywhere in the world. All it takes are modern high performance networks and new network protocols, such as UDT," said Robert Grossman, Director of the National Center for Data Mining at the University of Illinois at Chicago and Managing Partner of Open Data Group.
The network that made this feasible was PacketNet, a 10 Gb/s network provided by National LambdaRail (NLR).
"By using the National LambdaRail and its member regional optical networks, scientists can access terabyte size data sets in minutes instead of days. This is a great example of what you can do with member-owned infrastructure. We are just beginning to see the implications of this," said Tom West, NLR's president and CEO.
In the past, UDT and other technologies could move data at high speeds from memory to memory, but they faced challenges when used to move data from disk to disk over long distances, since disk-to-disk transfers require additional protocols and services. With SECTOR, it is now possible to transport large data sets disk-to-disk just as easily as memory-to-memory.
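To make the disk-to-disk case concrete, here is a hypothetical sender sketch that streams one file from local disk over an already-connected UDT socket; the function name send_file and the 8 MB chunk size are illustrative, and a system like SECTOR layers naming, replication and transfer coordination on top of a loop like this.

#include <fstream>
#include <vector>
#include <udt.h>

// 'sock' is assumed to be a connected UDTSOCKET; 'path' names the local file.
bool send_file(UDTSOCKET sock, const char* path)
{
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;

    std::vector<char> chunk(8 * 1024 * 1024);          // 8 MB read buffer (tunable)
    while (in) {
        in.read(&chunk[0], (std::streamsize)chunk.size());
        std::streamsize have = in.gcount();            // bytes actually read from disk
        std::streamsize done = 0;
        while (done < have) {                          // UDT::send may take fewer bytes per call
            int n = UDT::send(sock, &chunk[0] + done, (int)(have - done), 0);
            if (n == UDT::ERROR) return false;
            done += n;
        }
    }
    return true;
}

UDT itself also provides sendfile and recvfile helpers that cover this common case with less application code.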
"This demonstration showcases new techniques for data analysis by closely integrating application processes with leading edge advanced communication technologies. This innovation is significant because it results in both high performance data transport and in high quality analytic results," says Joe Mambretti, Director of the International Center for Advanced Internet Research at Northwestern University.
The winning team consisted of Yunhong Gu, Robert Grossman, Michal Sabala, David Hanley and Shirley Connelly from the National Center for Data Mining at the University of Illinois at Chicago; Alex Szalay, Ani Thakar, Jan vandenBerg and Alainna Wonders from Johns Hopkins University; and Joe Mambretti from Northwestern University.
For the Bandwidth Challenge, Force10 Networks loaned NCDM an E600 switch to use on the show floor; Extreme Networks provided an 8810 switch to use in Chicago; and DataDirect Networks provided an S2A9550 RAID controller and 80 disks to use on the show floor in Tampa.
The technology was tested using the Teraflow Network, which is managed by the Consortium for Data Analysis Research (CDAR).
For more details, see the web site: sdss.ncdm.uic.edu.
Source: National Center for Data Mining, University of Illinois at Chicago