November 22, 2011
BLOOMINGTON, Ind., Nov. 22 -- An Indiana University team recently set out to address a major concern of data-intensive research: How can we move massive amounts of data to supercomputing facilities for analysis? The team demonstrated data transfer over an experimental 100 Gigabits per second (Gbps) network—taking advantage of a link ten times faster than most in use today.
This experimental network was created to support testing by several universities during the SCinet Research Sandbox (SRS), part of this year's SC11 conference in Seattle, Washington. SRS let researchers assess experimental networking methods in a 100Gbps environment provided by SCinet, ESnet, and Internet2.
This first-of-its-kind production network was equipped with multi-vendor, OpenFlow-capable switches. IU's SRS entry, "The Data Superconductor: An HPC cloud using data-intensive scientific applications, Lustre-WAN, and OpenFlow over 100Gb Ethernet," used the Lustre file system and cutting-edge network infrastructure to address challenges created by the exponential growth in volume of digital scientific research data.
A complete cluster and file system operated at each end of the 2,300-mile 100Gbps link running between Indianapolis and Seattle. In a series of demonstrations, IU researchers achieved a peak throughput of 96Gbps on network benchmarks, 6.5GBps using IOR (a standard file system benchmark), and 5.2GBps with a mix of eight real-world application workflows.
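Note that the figures above mix units: network results are quoted in gigabits per second (Gbps), while the file-system results use gigabytes per second (GBps). A small Python sketch (the function and constant names are ours, for illustration only) converts the file-system numbers onto the same scale as the 100Gbps link:

```python
# Convert the reported file-system throughput (gigabytes/s) into
# gigabits/s, to compare against the nominal 100GbE link capacity.

def gbytes_to_gbits(gbytes_per_s):
    """Convert gigabytes/s to gigabits/s (1 byte = 8 bits)."""
    return gbytes_per_s * 8

LINK_CAPACITY_GBITS = 100  # nominal 100GbE link

for label, gbytes in [("IOR benchmark", 6.5), ("application mix", 5.2)]:
    gbits = gbytes_to_gbits(gbytes)
    utilization = gbits / LINK_CAPACITY_GBITS
    print(f"{label}: {gbytes} GBps = {gbits} Gbps "
          f"({utilization:.0%} of the 100Gbps link)")
```

So the 6.5GBps IOR result corresponds to 52Gbps, roughly half the raw capacity of the link, while the 96Gbps network benchmark approached its limit.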
As of press time (Nov. 22), this appears to be the fastest data transfer ever achieved with a 100Gbps network at a distance of thousands of miles.
"100 Gigabit per second networking combined with the capabilities of the Lustre file system could enable dramatic changes in data-intensive computing," said Stephen Simms, manager of the High Performance File Systems group at Indiana University. "Lustre's ability to support distributed applications, and the production availability of 100 gigabit networks connecting research universities in the US, will provide much needed and exciting new avenues to manage, analyze, and wrest knowledge from the digital data now being so rapidly produced."
US scientists need this capability to enhance scientific competitiveness and open new frontiers of digital discovery. The rapid acceleration of data growth presents obstacles for researchers who manage and transfer large data sets and participate in widely distributed collaborations.
IU's Data Superconductor is optimized for file system operations over the wide area network, and includes features for collaborating across administrative domains using multi-site workflows and distributing data from instruments to compute resources.
Since IU's Data Superconductor is a Lustre-based, high-performance file system, it requires no special tools or software to transfer data. It also behaves as a standard POSIX-compliant file system, and features cross-domain authorization capabilities developed at Indiana University.
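Because a Lustre mount presents a standard POSIX interface, ordinary file operations work unmodified. The sketch below illustrates the point with plain Python standard-library calls; the Lustre mount point is hypothetical (here simulated with a temporary directory), and `stage_dataset` is an illustrative name, not an IU tool:

```python
# Staging data onto a POSIX-compliant (e.g. Lustre-mounted) directory
# needs no special transfer software -- standard file calls suffice.
import os
import shutil
import tempfile

def stage_dataset(src, lustre_dir):
    """Copy a dataset into a (Lustre) directory with standard POSIX calls."""
    dest = os.path.join(lustre_dir, os.path.basename(src))
    shutil.copy2(src, dest)  # plain open/read/write under the hood
    return dest

# Demonstration with a temporary directory standing in for the mount:
with tempfile.TemporaryDirectory() as fake_mount:
    with tempfile.NamedTemporaryFile(suffix=".dat", delete=False) as f:
        f.write(b"simulation output")
        src = f.name
    dest = stage_dataset(src, fake_mount)
    print(os.path.exists(dest))  # True
    os.unlink(src)
```

The same code would run unchanged against a real Lustre mount, which is precisely the simplicity Henschel describes below.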
Notes Robert Henschel, manager of the High Performance Applications group, "The beauty of dealing with data distribution at a file system level is its simplicity. With a centralized file system serving thousands of computational resources around the world, user data can be available everywhere, all of the time."
IU also demonstrated how high-performance applications can dynamically signal resource requests to the network.
"We used the Extensible Session Protocol (XSP) and OpenFlow to dynamically move one application's traffic between Seattle and Indianapolis from a congested path to one with unused capacity, vastly improving performance," said Matt Davy, director of InCNTRE and chief network architect for the IU GlobalNOC. "IU is dedicated to exploring and demonstrating these types of exciting new advancements in high performance networking—the Sandbox challenge was a great opportunity for us to showcase the work we are doing in these areas."
Internet2, a key collaborator on IU's SRS entry, contributed a 100GbE circuit between Indianapolis and Chicago, as well as the optical system that brings that traffic to Seattle at 100Gb. In addition, Brocade contributed MLXe Ethernet routers equipped with 100GbE blades and a 15.36Tbps fabric for increased performance with less infrastructure and operational overhead. The 100GbE blades let IU aggregate multiple ports to create a single logical link for greater bandwidth and reduced management. IBM provided a pair of G8264 switches with OpenFlow firmware.
IBM, Brocade, Ciena, DataDirect Networks, Whamcloud, and TU Dresden provided support for IU's SC11 demonstrations. For more information about SC11, visit: http://sc11.supercomputing.org.
Source: Indiana University