November 05, 2012
SALT LAKE CITY, Utah – November 5, 2012 – This year, SC12 will not only showcase the next generation of high performance computing applications but will also be home to seven innovative network research projects through a special program called the SCinet Research Sandbox (SRS).
SCinet is the primary high performance network infrastructure built each year for SC exhibitors to highlight their cutting-edge computing applications and collaborations. As a key component of SCinet, the SRS is designed to enable researchers to experiment with and demonstrate innovative network architectures, applications and protocols in the unique live environment of the SCinet network. This year, the SRS will provide researchers with dedicated access to multiple 100 Gigabit per second (Gbps) wide area network links as well as a 10 Gbps OpenFlow network testbed.
“In addition to supporting the extreme demands of the HPC-based demonstrations that have become the trademark of the conference, SCinet also seeks to foster and highlight developments in network research that will be necessary to support the next generation of science applications,” said Brian Tierney, SRS co-chair for SC12 and head of ESnet’s Advanced Network Technologies Group. “100 Gbps networking and OpenFlow have become two of the most influential networking technologies of this decade. SRS allows the community to showcase innovations on these platforms while they are still in their infancy and to demonstrate the impact they may have on the entire HPC community in the future.”
“OpenFlow has the potential to greatly improve applications that are essential to advancements in HPC, such as GridFTP and others. By allowing these applications to access the network in a highly intelligent and programmable manner, OpenFlow can significantly improve end-to-end network performance, especially for bulk data transfers, which will be an increasingly difficult challenge in the era of data-intensive science,” said Andrew Lee, SRS co-chair for SC12 and Principal Network Systems Engineer for the Global Research Network Operations Center at Indiana University. “Demonstrations like those being supported by the SCinet Research Sandbox are laying the groundwork for these critical advancements and showing the community, in a tangible way, the possibilities that OpenFlow provides.”
Seven projects have been selected for the SRS program, all of which will be showcased in the Technical Program and demonstrated in several exhibit booths during the conference. For detailed information on the projects and their presentations, visit: http://sc12.supercomputing.org/content/scinet-research-sandbox
2012 SRS projects include:
Efficient LHC Data Distribution across 100Gbps Networks
The analysis of data behind the recent discoveries at the Large Hadron Collider produces data flows of more than 100 petabytes per year and increasingly relies on the efficient movement of data sets between globally distributed computing sites.
The team will demonstrate state-of-the-art data movement tools as an enabling technology for high-throughput data distribution over 100 Gbps WAN circuits. The demo will interconnect three major LHC Tier-2 computing sites and the SC12 show floor (booth 809) using 100 Gbps technology (a sketch of the parallel-stream technique such tools rely on follows this entry).
Collaborating organizations: University of Victoria, University of Michigan, California Institute of Technology, Vanderbilt University, Internet2, ESnet, CENIC, Starlight, PacWave/GLORIAD, KNU/KISTI as well as vendors: Alcatel Lucent, Ciena, Cisco, Juniper Networks, Mellanox, Dell-Force10, SuperMicro, ASA Micro Systems, Data Direct Network, Fusion-IO, PADTECH
Demonstration booth: 809
Presentation: November 15, 2012, 8:50am MT in Room 155-F2
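The release doesn't name the specific movers, but GridFTP-class tools (cited elsewhere in this announcement) get much of their throughput from striping a transfer across parallel TCP streams so that no single connection's window caps the aggregate rate. A minimal sketch of that technique, with a hypothetical file, receiver, and stream count:

```python
# Parallel-stream bulk transfer sketch: split a file into byte ranges and
# push each range over its own TCP connection. FILE, DEST and STREAMS are
# illustrative values, not details from the SC12 demo.
import os
import socket
import struct
import threading

FILE = "dataset.bin"                     # hypothetical LHC data file
DEST = ("receiver.example.org", 5000)    # hypothetical receiving host
STREAMS = 8                              # parallel TCP streams

def send_range(offset: int, length: int) -> None:
    """Send one byte range, prefixed with (offset, length) so the
    receiver can reassemble ranges that arrive out of order."""
    with socket.create_connection(DEST) as sock, open(FILE, "rb") as f:
        f.seek(offset)
        sock.sendall(struct.pack("!QQ", offset, length))
        remaining = length
        while remaining:
            chunk = f.read(min(1 << 20, remaining))  # 1 MiB at a time
            sock.sendall(chunk)
            remaining -= len(chunk)

size = os.path.getsize(FILE)
step = -(-size // STREAMS)               # ceiling division
threads = [threading.Thread(target=send_range,
                            args=(i * step, min(step, size - i * step)))
           for i in range(STREAMS) if i * step < size]
for t in threads:
    t.start()
for t in threads:
    t.join()
```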
Exploiting Network Parallelism for Improving Data Transfer Performance
The task of scientific bulk data movement (e.g., migrating collected results from instruments to processing and storage facilities) is often hampered by a lack of available network resources. Traditional R&E connectivity can be congested on portions of an end-to-end path, degrading overall performance. This SRS project will explore dynamic network control to facilitate efficient bulk data movement, combining opportunistic use of "traditional" networks with dedicated reservations over virtual circuits and OpenFlow-enabled resources. The GridFTP application has been instrumented with the eXtensible Session Protocol (XSP), an intelligent system capable of controlling programmable networks. The project intends to show end-to-end performance improvement between the SC12 conference and campuses involved in the DYNES project through a combination of regular connectivity, dynamic bandwidth allocations, TCP acceleration, and operations using multiple paths (the path-selection idea is sketched after this entry).
Collaborating organizations: Indiana University, Lawrence Berkeley National Laboratory, Argonne National Laboratory and Internet2
Demonstration booths: 1042, 1343
Presentation: Thursday, November 15, 2012, 9:10am MT in Room 155-F2
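XSP's interface isn't documented in this release, so the following is only a sketch of the dispatch idea the project describes: track headroom on each available path, the shared R&E route plus any dynamically reserved circuits, and hand each block of a bulk transfer to the least-loaded path. All names and rates are illustrative.

```python
# Greedy multi-path dispatch sketch (not the XSP API): each block of a bulk
# transfer goes down whichever path currently has the most spare capacity.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    capacity_gbps: float
    in_flight_gbps: float = 0.0

    @property
    def headroom(self) -> float:
        return self.capacity_gbps - self.in_flight_gbps

# Illustrative paths: a congested shared route and an idle reserved circuit.
paths = [
    Path("shared-R&E", capacity_gbps=10.0, in_flight_gbps=7.5),
    Path("dynamic-circuit", capacity_gbps=10.0),
]

def dispatch(block_gbps: float) -> Path:
    """Assign the next block to the path with the most headroom."""
    best = max(paths, key=lambda p: p.headroom)
    best.in_flight_gbps += block_gbps
    return best

for _ in range(4):
    chosen = dispatch(2.0)
    print(f"block -> {chosen.name} ({chosen.headroom:.1f} Gb/s headroom left)")
```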
Multipathing with MPTCP and OpenFlow
This demo showcases several emerging network technologies and how they can be used for big data transfers between data centers. In this demonstration, traffic is sent simultaneously across multiple OpenFlow-controlled paths between Geneva and Salt Lake City. The congestion control mechanism of Multipath TCP (MPTCP) favors the least congested paths and keeps the load balanced across them (the coupled-increase rule behind this behavior follows this entry).
Collaborating organizations: SURFnet, SARA, iCAIR and California Institute of Technology
Demonstration booths: 2333, 809, 501
Presentation: November 15, 2012, 10:30am MT in Room 155-F2
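The "favors the least congested paths" behavior is a property of MPTCP's coupled congestion control. One standard formulation, the Linked Increases Algorithm of RFC 6356 (not necessarily the exact variant in this demo), grows the window w_r of subflow r on each ACK by:

```latex
% Coupled window increase on subflow r (windows in packets, per-ACK form):
\Delta w_r = \min\!\left(\frac{\alpha}{w_{\mathrm{total}}},\ \frac{1}{w_r}\right),
\qquad
\alpha = w_{\mathrm{total}}\,
         \frac{\max_r \left( w_r / \mathrm{rtt}_r^{2} \right)}
              {\left( \sum_r w_r / \mathrm{rtt}_r \right)^{2}},
\qquad
w_{\mathrm{total}} = \sum_r w_r .
```

Window growth is shared across subflows, while each loss halves only the subflow that experienced it, so windows on congested paths shrink and traffic shifts toward the less congested ones.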
Next Generation Science DMZ Model with OpenFlow
The emerging era of “Big Science” demands the highest possible network performance. End-to-end circuit automation and workflow-driven customization are two essential capabilities networks need to scale to meet this challenge. This demonstration showcases how combining software-defined networking techniques with virtual circuit capabilities can transform the network into a dynamic, customer-configurable virtual switch, letting users rapidly tailor network behavior to their workflows with little to no configuration effort. The demo also highlights how the network can be automated to support multiple collaborations in parallel (a sketch of the rule-installation step follows this entry).
Collaborating organizations: ESnet, Ciena Corporation
Demonstration booth: 2437
Presentation: November 15, 2012, 11:10am MT in Room 155-F2
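The release doesn't name the controller behind the demo, but the "customer-configurable virtual switch" idea reduces to installing match-and-forward rules from software. A minimal sketch against the open-source Ryu OpenFlow framework (an assumption; Ryu is not mentioned in the release), steering a hypothetical data-transfer subnet out a dedicated bypass port:

```python
# Science DMZ-style flow steering sketch: on switch connect, install one
# high-priority rule sending traffic for a science data subnet out a bypass
# port. Subnet, port number and choice of the Ryu framework are assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ScienceDMZSteering(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    BYPASS_PORT = 3                                 # hypothetical circuit port
    SCIENCE_SUBNET = ("10.10.0.0", "255.255.0.0")   # hypothetical DTN subnet

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ofp = dp.ofproto
        # Match IPv4 traffic bound for the science subnet...
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=self.SCIENCE_SUBNET)
        # ...and forward it out the bypass port ahead of default rules.
        actions = [parser.OFPActionOutput(self.BYPASS_PORT)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```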
OpenFlow Enabled Hadoop over Local and Wide Area Cluster
The Hadoop Distributed File System (HDFS) and Hadoop's implementation of MapReduce together form one of the most widely used platforms for data-intensive computing. The shuffle and sort phases of a MapReduce computation often saturate the network links to nodes, forcing the reduce phase to wait for data. This study explores using OpenFlow to control the network configuration on a per-flow basis, thereby providing different network characteristics for different categories of Hadoop traffic (a sketch of the port-based classification follows this entry).
Collaborating organizations: University of Chicago
Demonstration booth: 501
Presentation: November 15, 2012, 10:50am MT in Room 155-F2
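One way the traffic categories can be told apart: Hadoop's traffic classes ride on distinct TCP ports, so a controller can key queue or path selection off the destination port. The sketch below uses stock Hadoop 1.x default ports; the queue policy itself is invented for illustration and is not the study's published configuration.

```python
# Port-based Hadoop traffic classification sketch. Ports are Hadoop 1.x
# defaults; the queue assignments are a hypothetical QoS policy.
HADOOP_CLASSES = {
    50060: "shuffle",              # TaskTracker HTTP port serving map output
    50010: "hdfs-block-transfer",  # DataNode data transfer port
    8020:  "hdfs-namenode-rpc",    # NameNode RPC
}

QUEUE_FOR_CLASS = {
    "shuffle": 0,                  # widest queue: shuffle saturates links
    "hdfs-block-transfer": 1,
    "hdfs-namenode-rpc": 2,        # small but latency-sensitive
    "best-effort": 3,
}

def queue_for_flow(tcp_dst: int) -> int:
    """Pick the OpenFlow queue id for a new flow from its Hadoop class;
    a controller would pair this with a set-queue action in the flow rule."""
    return QUEUE_FOR_CLASS[HADOOP_CLASSES.get(tcp_dst, "best-effort")]

assert queue_for_flow(50060) == 0  # shuffle traffic gets the widest queue
assert queue_for_flow(22) == 3     # everything else is best-effort
```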
OpenFlow Services for Science: An International Experimental Research Network
Large-scale data intensive science requires global collaboration and sophisticated high-capacity data management. The emergence of more flexible networking, for example using techniques based on OpenFlow, provides opportunities to address these issues because such techniques enable a high degree of network customization and dynamic provisioning. They also enable large-scale facilities to be created for prototyping new architectures, services, protocols, and technologies. A number of research organizations from several countries have designed and implemented a persistent international experimental research facility that can be used to prototype, investigate, and test network innovations for large-scale global science. For SC12, this international experimental network facility will be extended from sites across the world to the conference show floor, where it will be used to support several testbeds and to showcase a series of complementary demonstrations.
Collaborating organizations: International Center for Advanced Internet Research, Northwestern University; National Center for High-Performance Computing, Taiwan; University of Applied Sciences, Taiwan; National Cheng-Kung University, Taiwan; SARA, The Netherlands; California Institute of Technology/CERN; SURFnet, The Netherlands.
Demonstration booths: 2333, 501, 843, 809
Presentation: November 15, 2012, 8:30am MT in Room 155-F2
Scalable Cyber-Security for Terabit Cloud Computing
Reservoir Labs will demonstrate R-Scope, a scalable, high-performance network packet inspection technology that forms the core of a new generation of intrusion detection systems, enabling cyber security infrastructures that scale to terabit-per-second ingest bandwidths. This scalability comes from low-power, high-performance manycore network processors combined with Reservoir’s enhancements to Bro. The innovative R-Scope PACE-T appliance, implemented on a 1U Tilera TILExtreme-Gx platform, will demonstrate cyber-security analysis at 80 Gbps by tightly coupling cyber-security-aware front-end traffic load balancing with the full back-end analytic power of Bro (a toy sketch of the balancing idea follows this entry). This fully programmable platform incorporates the full Bro semantics into both the appliance’s load-balancing front end and the back-end analytic nodes.
Collaborating organizations: Reservoir Labs, SCinet Security Team
Presentation: November 15, 2012, 9:30am MT in Room 155-F2
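The release gives no detail on how the front end balances traffic, but "cyber-security aware" load balancing generally implies direction-insensitive flow hashing, so both halves of a connection reach the same analysis node and per-connection state stays local. A toy sketch of that property (the hash and node count are illustrative, not Reservoir's implementation):

```python
# Symmetric 5-tuple flow hashing sketch: sorting the endpoints makes the
# A->B and B->A directions of a connection hash to the same analysis node.
import hashlib

NUM_NODES = 16  # hypothetical count of back-end analytic nodes

def node_for_packet(src_ip: str, dst_ip: str, src_port: int,
                    dst_port: int, proto: int) -> int:
    """Map a packet to an analysis node, independent of direction."""
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % NUM_NODES

# Both directions of one connection land on the same node:
assert node_for_packet("10.0.0.1", "10.0.0.2", 40000, 443, 6) == \
       node_for_packet("10.0.0.2", "10.0.0.1", 443, 40000, 6)
```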