Cloud Services Satisfy a Higher Calling

By Tiffany Trader

May 15, 2012

Cloud computing is enabling services at scale for everyone, from scientific organizations and commercial providers to individual consumers. Higher education, in particular, has many collaborative projects that lend themselves to cloud services; often, however, those services are not tailored to the uniqueness of an academic environment. For example, very few businesses have their research departments work with their competitors, whereas in higher education, most research collaboration occurs between institutions. That’s where the Internet2 NET+ project comes in. During its annual member meeting, the networking consortium announced the addition of 16 cloud services to its NET+ program, aimed at reducing the barriers to research. HPC in the Cloud spoke with Shel Waggener, Senior Vice President of Internet2 NET+ and Associate Vice Chancellor & CIO for the University of California, Berkeley, to get the full story.

Internet2 sees itself as a bridge between the academic communities and commercial vendors. “We’re focused on cloud computing enabling scale for a community,” Waggener stated, adding, “The ability to have any researcher, any student, anywhere at any institution and instantly use services together is a very powerful opportunity.”

Internet2 is probably best known for its 100 Gigabit Ethernet network, with 8.8 terabits of aggregate capacity, used by the national science labs and the research institutions that are Internet2 members. The not-for-profit was established to support research in higher education in the United States. Since 1996 its mission has been focused on removing the barriers to research, and one of those barriers has been the network, since researchers often require a level of network capacity beyond the scope of commercial carriers. With the advance of cloud computing, the same limitation now applies to services that are accessed through the network (i.e., IaaS, PaaS, SaaS, etc.). The expanded NET+ offering allows Internet2 members to simply add the services they want to their core membership.

In the current model, individual researchers must go through the sometimes complex, costly, and time-consuming process of creating a cloud environment on their own. This first step is a big one. There are contractual terms, payment and billing options, and other administrative tasks to attend to; then the service has to be set up to enable sharing across multiple team members and multiple organizations, each of which also needs to create accounts and implement security protocols.

From Waggener: “There is a lot of work done every day by researchers around the world that is in essence lost, a one-time effort with no marginal gain, because as soon as they do that work, then they’re focused on their science, and when they’re done, it’s gone. All the work that went into enabling that science has been sunset. Through the NET+ services model, there is more effort at the outset – collaboration isn’t free – but the payoffs are huge.”

With Internet2, there is a master agreement with the provider, and then there’s a campus member agreement that allows users to add subscriptions to these services. All the terms have been signed off on by legal counsel at the member institutions, so as a faculty member, you know exactly what you are going to get.

Internet2 is taking community-developed services, built for specific researchers or specific disciplines, and moving them into a community cloud architecture. It is leveraging its investments in middleware and innovations in federated identity to let researchers use their local institutional credentials and be validated at another institution through InCommon’s identity management services. This makes it possible for a Berkeley student to obtain instant access to services at Michigan or Harvard, and allows faculty members from different universities to collaborate on data analytics or share computing resources.
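For readers less familiar with federated login, the sketch below shows, in rough terms, what a hosting service does with the identity assertion a researcher’s home campus sends over: it reads the eduPerson attributes and grants access without issuing a separate local account. The XML layout and handling here are deliberately simplified and illustrative; this is not InCommon’s or any campus’s actual implementation.

```python
# Illustrative sketch only: a service provider pulling a user's home-institution
# identity out of a SAML-style attribute statement, the general pattern behind
# InCommon-federated access. Attribute OIDs follow the eduPerson schema; the
# assertion below is a hypothetical, trimmed-down example.
import xml.etree.ElementTree as ET

SAML_ASSERTION = """
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <AttributeStatement>
    <Attribute Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.6">
      <AttributeValue>jdoe@berkeley.edu</AttributeValue>
    </Attribute>
    <Attribute Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.9">
      <AttributeValue>member@berkeley.edu</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
EPPN_OID = "urn:oid:1.3.6.1.4.1.5923.1.1.1.6"         # eduPersonPrincipalName
AFFILIATION_OID = "urn:oid:1.3.6.1.4.1.5923.1.1.1.9"  # eduPersonScopedAffiliation

def extract_identity(assertion_xml: str) -> dict:
    """Collect the federated identity attributes a hosting service would use for authorization."""
    root = ET.fromstring(assertion_xml)
    attrs = {}
    for attr in root.findall(".//saml:Attribute", NS):
        values = [v.text for v in attr.findall("saml:AttributeValue", NS)]
        attrs[attr.get("Name")] = values
    return {
        "principal": attrs.get(EPPN_OID, [None])[0],
        "affiliations": attrs.get(AFFILIATION_OID, []),
    }

if __name__ == "__main__":
    identity = extract_identity(SAML_ASSERTION)
    # A hosting campus grants access based on the home institution's assertion,
    # without creating a separate account for the visiting researcher.
    print(identity)  # {'principal': 'jdoe@berkeley.edu', 'affiliations': ['member@berkeley.edu']}
```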

But to make an HPC cloud project truly successful, Waggener believes they need to integrate in the commercial solutions that exist today. “We’re taking advantage of economies of scale here, not trying to replicate Blue Gene,” notes Waggener.

The strategy beyond the latest round of cloud service partnerships is to take offerings that were designed for the commercial sector and help tune them for higher education, while keeping costs down. By its nature, the higher ed space is more difficult to work with than other domains as one institution contains every possible type of engagement. A solution that is perfect for one department may not be ideal for another. Waggener explains that fine-tuning services to meet these unique needs usually creates cost barriers for companies trying to offer services to higher education. The goal of the NET+ program is to eliminate those cost premiums for the commercial providers and in doing so simplify business processes on the academic side, so that both parties can take out the unnecessary costs – administrative, legal, contractual and so on – while enabling the faster adoption of services.

In Waggener’s view, the biggest challenge to traditional academic computing is that the largest resources are always constrained. They become oversubscribed immediately, no matter how large they are or how quickly they are deployed, and this oversubscription creates an underutilization of the resource. Queue management becomes a significant problem, Waggener notes, and you end up with code being deployed that hasn’t been fully optimized for that level of research. Some of the largest big data analysis jobs are left waiting significant blocks of time to achieve their science. The instrument isn’t the challenge, says Waggener; it’s all of the dynamics around tuning the specific experiment or analytic activity to that particular resource.

Now, with the advance of cloud computing, there is an explosion in global capacity, in resources, but researchers are still single threading their applications.

“If you want to use some number of machines simultaneously, the question becomes how do you do that? Do you get an account at Amazon? Do you run it through your credit card? What if you want to share all that information and results with someone else? You basically have to create a relationship between the individual researchers and Amazon, that’s a costly and time-intensive task,” comments Waggener.

“The Amazon setup has historically been for small-to-medium businesses, but that’s not how researchers work. The right approach isn’t to get in the way of researchers who want to immediately access those resources, but in fact to have those brokerages done in advance so that the contracts are already in place, so they can log in using their institutional credentials and pick a resource availability from Dell, or from IBM, or from Amazon, in a brokered fashion that takes care of all the complexities of higher education. For the first time, we can work with commercial providers who can leverage their R&D cost for commercial purposes and not have to simply work with them to negotiate a discount price off of their commercial rate for education but instead tune the offering and remove many of the costs that drive the expenditures and overhead for the commercial side and the higher ed side.”

The result is custom-tuned services – both in regard to terms and conditions and, in many cases, the offering itself – designed to meet the community’s needs.
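To make the brokered model Waggener describes a little more concrete, here is a hypothetical sketch of the kind of logic such a portal could apply: a researcher’s institutional identity maps to pre-negotiated provider agreements, and the broker picks a provider that satisfies the compliance and cost constraints. The class names, rates, and provider list are invented for illustration and do not describe the actual NET+ portal.

```python
# Hypothetical sketch of brokered provisioning: institutional credentials map to
# pre-negotiated agreements, so a researcher can pick capacity from a provider
# without signing a new contract. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Agreement:
    provider: str               # e.g. "aws", "azure", "dell"
    rate_per_core_hour: float   # pre-negotiated community rate (made-up figure)
    compliance: set             # standards the hosting environment meets

CAMPUS_AGREEMENTS = {
    "berkeley.edu": [
        Agreement("aws",   0.045, {"FERPA"}),
        Agreement("dell",  0.060, {"FERPA", "HIPAA", "FISMA"}),
        Agreement("azure", 0.050, {"FERPA", "HIPAA"}),
    ],
}

def broker_request(principal: str, cores: int, hours: int, required_compliance: set):
    """Pick the cheapest pre-contracted provider that satisfies the compliance needs."""
    campus = principal.split("@", 1)[1]
    eligible = [a for a in CAMPUS_AGREEMENTS.get(campus, [])
                if required_compliance <= a.compliance]
    if not eligible:
        raise LookupError("no pre-negotiated agreement covers these requirements")
    best = min(eligible, key=lambda a: a.rate_per_core_hour)
    estimated_cost = best.rate_per_core_hour * cores * hours
    # In a real portal, this is where the provider's own API would be called to
    # provision the environment under the institutional master agreement.
    return {"provider": best.provider, "estimated_cost": round(estimated_cost, 2)}

print(broker_request("jdoe@berkeley.edu", cores=256, hours=12, required_compliance={"HIPAA"}))
# {'provider': 'azure', 'estimated_cost': 153.6}
```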

Meet the Players

Amazon, Dell, HP and Microsoft are all major contributors to the new Internet2 service offering. Through its partnership with CENIC, Internet2 will offer an Amazon brokerage point in the form of a self-service portal. Members will have access to all eligible AWS services at a discounted rate and will benefit from expanded payment options.

HP, for its part, has been white-labeling parts of its HPC cloud environment for some time now through its HP Labs arm. Historically, HP has done a lot of work with research universities and individual university contracts, Waggener shares. Now, with its expanded commercial cloud offering, the company is linking those offerings together. Several Internet2 member organizations have been working with HP over the last 18 months on different use cases, utilizing environments that HP built for SHI for very high-end, high-availability commercial customers. These offer sophisticated migration and provisioning tools that enable some very interesting use cases. The use case unveiled at the Internet2 member meeting is high-end, high-availability application environment hosting, allowing institutions to move sensitive data with strong compliance requirements, as in the case of genomics workloads or real-time medical imaging.

“HP is doing a lot of custom work with us, while Dell is engaged with us in a similar process but taking some of its commercial offerings to leverage across, and the Microsoft Azure cloud is going to be available to all Internet2 member institutions for two different processes,” says Waggener. First, they’ll be collaborating with Microsoft Research on a big data research initiative and second, the Azure cloud will be available to Internet2 members for any purpose with no networking charges. Internet2 will have multiple 10Gig connections directly into the Azure cloud, so researchers can spin up an Azure instance without worrying about the data transfer rates in and out (since Internet2 NET+ will own the networking). This will be a huge boon to researchers who don’t really have predictable data transfer needs, adds Waggener.

Dell likes to say that it’s involved in the project from the “desktop to the datacenter” with three initial offerings: Virtual Desktop-as-a-Service, vCloud Datacenter Service and the Dell Virtual Private Cloud (VPDC) Services. Dell is also collaborating with the Internet2 community to advance innovation around big data and research storage services.

Compliance and security are top priorities for the project and the vendors. This sentiment was underscored by Dell’s Director of Global Education, Jon Phillips, who told HPC in the Cloud that when it comes to research and education, the compliance bar is set very high. “Not just any cloud provider can jump into the space and deliver a secure environment that addresses even the minimal base industry standards set forth by HIPAA, FERPA, FISMA, and at the same time be able to do it in a cost-effective and convenient manner,” Phillips stated. Dell’s Plano, Texas, datacenter is not only compliant with those standards, it also connects directly to Internet2. This is part of Dell’s value proposition and something the community has been asking for. Phillips adds that many of the Internet2 member organizations have also been long-time Dell customers.

FISMA, the Federal Information Security Management Act, comes into play especially for researchers working on government grant projects that require this level of compliance from a data-handling perspective. The environment Dell offers out of its Plano, Texas, datacenter allows those controls to be put in place.

This project as a whole is promising for its ability to take a lot of the frustration and complexity out of the cloud equation, as Phillips relates: “[Researchers] want to work with providers that can build an agreement structure, and a wrapper that allows for quick provisioning of the solutions and an easy way to procure that is safe-and-sound, vetted by community. That’s the benefit that Internet2’s NET+ program brings to the member community. The solution has been vetted, has the right procurement mechanisms that are important to higher education. A university CIO knows that when they procure something from NET+ program that it’s been through those gated elements.”

While much of the recent news involves commercial-to-community offerings, community peer-to-peer is another important service model. Both approaches require facilitated brokerage to take the cost out and to ensure that the models are in place throughout the community. To that end, Internet2 is working with universities that have HPC resources they want to offer to other institutions as spot cloud instances.

“The worst thing we could do is invest hundreds of millions in capital in any of our institutions and then have them utilized at less than 100 percent,” says Waggener, noting that average utilization for a regular cluster is 20-something percent, while a dedicated high performance cluster is kept occupied about half the time. Suppose a research team based in Chicago needs a staging environment to move to a major national center without having to pay for a capital outlay. If the team can spin up an instance of the environment it needs in the Princeton datacenter, then Princeton recovers some orphan costs and Chicago gains immediate access to resources without having to go through a purchasing cycle.
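A rough back-of-the-envelope calculation shows why those utilization figures matter. Using invented capital and node counts, together with the 20 percent and roughly 50 percent utilization levels Waggener cites, the effective cost of each delivered node-hour falls by more than half as a cluster goes from lightly used to half busy:

```python
# Back-of-the-envelope illustration of the utilization point made above: the lower
# a cluster's utilization, the more each delivered node-hour effectively costs.
# Capital cost, node count, and lifetime are placeholder figures.
def effective_cost_per_node_hour(capital_cost, nodes, lifetime_years, utilization):
    total_node_hours = nodes * lifetime_years * 365 * 24
    delivered_node_hours = total_node_hours * utilization
    return capital_cost / delivered_node_hours

CAPITAL = 10_000_000   # hypothetical $10M cluster
NODES = 500
LIFETIME = 4           # years of service

for label, util in [("typical campus cluster (~20% busy)", 0.20),
                    ("dedicated HPC cluster (~50% busy)", 0.50)]:
    cost = effective_cost_per_node_hour(CAPITAL, NODES, LIFETIME, util)
    print(f"{label}: ${cost:.2f} per delivered node-hour")

# Sharing idle capacity with other institutions pushes utilization up and the
# effective cost per delivered node-hour down.
```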

Internet2 will be deploying these services in a phased approach. There’s a service validation phase where multiple institutions commit their time and energy to working with the providers to ensure that the service has been customized to meet the needs of higher education. Then the service gets promoted to an early adopter phase, where it rolls out to a broader set of Internet2 members, the beta customers, who can contribute final adjustments and tuning. At this point, the service will convert to general availability, enabling any Internet2 member to add it to their subscription and start using it.
