Cloud Services Satisfy a Higher Calling

By Tiffany Trader

May 15, 2012

Cloud computing is enabling services at scale for everyone, from scientific organizations and commercial providers to individual consumers. Higher education, in particular, has many collaborative projects that lend themselves to cloud services; however, those services are often not tailored to the unique character of an academic environment. Very few businesses, for example, have their research departments work with their competitors, whereas in higher education most research collaborations span institutions. That's where the Internet2 NET+ project comes in. During its annual member meeting, the networking consortium announced the addition of 16 cloud services to its NET+ program, aimed at reducing the barriers to research. HPC in the Cloud spoke with Shel Waggener, Senior Vice President of Internet2 NET+ and Associate Vice Chancellor & CIO for the University of California, Berkeley, to get the full story.

Internet2 sees itself as a bridge between the academic communities and commercial vendors. "We're focused on cloud computing enabling scale for a community," Waggener stated, adding, "The ability to have any researcher, any student, anywhere at any institution and instantly use services together is a very powerful opportunity."

Internet2 is probably best known for its 100 Gigabit Ethernet network, with 8.8 terabits of aggregate capacity, used by the national science labs and the research institutions that are Internet2 members. The not-for-profit was established to support research at higher education institutions in the United States. Since 1996, its mission has focused on removing the barriers to research, and one of those barriers has been the network itself, since researchers often require a level of network capacity beyond the scope of commercial carriers. With the advance of cloud computing, the same limitation now applies to services that are accessed through the network (IaaS, PaaS, SaaS, and so on). The expanded NET+ offering allows Internet2 members to simply add the services they want to their core membership.

In the current model, individual researchers must go through the sometimes complex, costly, and time-consuming process of creating a cloud environment on their own. This first step is a very big one: there are contractual terms, payment and billing options, and other administrative tasks to attend to, and then the service has to be set up to enable sharing across multiple team members and multiple organizations, each of which must also create accounts and implement security protocols.

From Waggener: “There is a lot of work done every day by researchers around the world that is in essence lost, a one-time effort with no marginal gain, because as soon as they do that work, then they’re focused on their science, and when they’re done, it’s gone. All the work that went into enabling that science has been sunset. Through the NET+ services model, there is more effort at the outset – collaboration isn’t free – but the payoffs are huge.”

With Internet2, there is a master agreement with the provider, and then there is a campus member agreement that allows users to add subscriptions to these services. All the terms are signed off on by legal counsel at the member institutions, so a faculty member knows exactly what they are going to get.

Internet2 is taking community-developed services, built for specific researchers or specific disciplines, and moving them into a community cloud architecture. It is leveraging its investments in middleware and innovations in federated identity so that researchers can use their local institutional credentials and be validated at another institution through InCommon's identity management services. This makes it possible for a Berkeley student to obtain instant access to services at Michigan or Harvard, and allows faculty members from different universities to collaborate on data analytics or to share computing resources.
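To make the federated identity model concrete, below is a minimal Python sketch of attribute-based authorization in an InCommon/SAML-style login flow. The eduPerson attribute names follow the standard schema, but the trusted-IdP list, the affiliation policy, and the authorize function are hypothetical; they only convey the idea of trusting an assertion from a researcher's home institution instead of creating a local account for every service.

# Illustrative sketch of attribute-based authorization in a federated
# (InCommon/SAML-style) login flow. Attribute names follow the eduPerson
# schema; the trusted-IdP list and policy below are hypothetical and are
# not part of any actual NET+ service.

# Identity providers (home institutions) this hypothetical service trusts.
ALLOWED_IDPS = {
    "urn:mace:incommon:berkeley.edu",
    "urn:mace:incommon:umich.edu",
    "urn:mace:incommon:harvard.edu",
}

# Affiliations permitted to use the service.
ALLOWED_AFFILIATIONS = {"faculty", "staff", "student", "member"}


def authorize(idp_entity_id: str, attributes: dict) -> bool:
    """Grant access if the assertion comes from a trusted home institution
    and the user's released affiliation is acceptable."""
    if idp_entity_id not in ALLOWED_IDPS:
        return False
    # eduPersonScopedAffiliation values look like "faculty@berkeley.edu".
    affiliations = attributes.get("eduPersonScopedAffiliation", [])
    return any(a.split("@")[0] in ALLOWED_AFFILIATIONS for a in affiliations)


if __name__ == "__main__":
    # A Berkeley faculty member logging in to a service hosted elsewhere.
    assertion = {
        "eduPersonPrincipalName": ["researcher@berkeley.edu"],
        "eduPersonScopedAffiliation": ["faculty@berkeley.edu"],
    }
    print(authorize("urn:mace:incommon:berkeley.edu", assertion))  # True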

But to make an HPC cloud project truly successful, Waggener believes, it is necessary to integrate the commercial solutions that exist today. "We're taking advantage of economies of scale here, not trying to replicate Blue Gene," notes Waggener.

The strategy behind the latest round of cloud service partnerships is to take offerings that were designed for the commercial sector and help tune them for higher education, while keeping costs down. By its nature, the higher ed space is more difficult to work with than other domains, as a single institution contains every possible type of engagement; a solution that is perfect for one department may not be ideal for another. Waggener explains that fine-tuning services to meet these unique needs usually creates cost barriers for companies trying to offer services to higher education. The goal of the NET+ program is to eliminate those cost premiums for the commercial providers and, in doing so, simplify business processes on the academic side, so that both parties can take out unnecessary costs – administrative, legal, contractual and so on – while enabling faster adoption of services.

In Waggener's view, the biggest challenge in traditional academic computing is that the largest resources are always constrained. They become oversubscribed immediately, no matter how large they are or how quickly they are deployed, and this oversubscription leads to underutilization of the resource. Queue management becomes a significant problem, Waggener notes, and you end up with code being deployed that hasn't been fully optimized for that level of research. Some of the largest big data analysis jobs are left waiting significant blocks of time to get their science done. The instrument isn't the challenge, says Waggener; it's all of the dynamics around tuning the specific experiment or analytic activity for that particular resource.
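The connection Waggener draws between oversubscription and long waits shows up in even the simplest queueing model. The short Python sketch below uses a single-queue M/M/1 approximation (a deliberate oversimplification of a real batch scheduler) with a hypothetical service rate, to show how expected turnaround time balloons as demand approaches capacity.

# Simple M/M/1 queueing illustration of why oversubscribed shared resources
# develop long waits: as demand (arrival rate) approaches capacity (service
# rate), expected time in the system grows without bound. This is only an
# approximation of how a real HPC batch scheduler behaves.

def expected_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Mean time a job spends waiting plus running, under the M/M/1 model."""
    if arrival_rate >= service_rate:
        return float("inf")  # oversubscribed: the queue grows indefinitely
    return 1.0 / (service_rate - arrival_rate)


if __name__ == "__main__":
    SERVICE_RATE = 1.0  # jobs completed per hour (hypothetical)
    for load in (0.5, 0.8, 0.95, 0.99):
        t = expected_time_in_system(load * SERVICE_RATE, SERVICE_RATE)
        print(f"load {load:.0%}: expected time in system is about {t:.1f} hours")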

Now, with the advance of cloud computing, there has been an explosion in global capacity and resources, but researchers are still single-threading their applications.

“If you want to use some number of machines simultaneously, the question becomes how do you do that? Do you get an account at Amazon? Do you run it through your credit card? What if you want to share all that information and results with someone else? You basically have to create a relationship between the individual researchers and Amazon, that’s a costly and time-intensive task,” comments Waggener.

“The Amazon setup has historically been for small-to-medium businesses, but that’s not how researchers work. The right approach isn’t to get in the way of researchers who want to immediately access those resources, but in fact to have those brokerages done in advance so that the contracts are already in place, so they can log in using their institutional credentials and pick a resource availability from Dell, or from IBM, or from Amazon, in a brokered fashion that takes care of all the complexities of higher education. For the first time, we can work with commercial providers who can leverage their R&D cost for commercial purposes and not have to simply work with them to negotiate a discount price off of their commercial rate for education but instead tune the offering and remove many of the costs that drive the expenditures and overhead for the commercial side and the higher ed side.”

The result is custom-tuned services – both in regard to terms and conditions and, in many cases, the offering itself – designed to meet the community's needs.
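As a rough illustration of what this brokered model might look like from the researcher's side, here is a short Python sketch of selecting capacity from a catalog of pre-negotiated offerings. The provider names echo those mentioned in this article, but the catalog entries, rates, compliance flags, and selection logic are entirely hypothetical and do not represent an actual NET+ interface.

# Illustrative sketch of a pre-negotiated provider catalog. The entries,
# rates, and selection flow are hypothetical; they only show the idea of
# choosing brokered capacity under terms already signed at the consortium
# level, rather than negotiating a separate contract per researcher.

from dataclasses import dataclass


@dataclass
class Offering:
    provider: str        # e.g., "Amazon", "Dell", "Microsoft Azure"
    instance_type: str   # provider-specific label (hypothetical)
    hourly_rate: float   # pre-negotiated rate in USD (hypothetical)
    compliant: set       # compliance regimes covered, e.g., {"HIPAA", "FISMA"}


CATALOG = [
    Offering("Amazon", "hpc-large", 1.20, {"FERPA"}),
    Offering("Dell", "vpdc-secure", 1.75, {"HIPAA", "FERPA", "FISMA"}),
    Offering("Microsoft Azure", "a-series", 0.95, {"FERPA"}),
]


def cheapest_offering(required_compliance: set) -> Offering:
    """Pick the lowest-rate brokered offering that meets the project's
    compliance requirements."""
    eligible = [o for o in CATALOG if required_compliance <= o.compliant]
    if not eligible:
        raise LookupError("no brokered offering meets the requirements")
    return min(eligible, key=lambda o: o.hourly_rate)


if __name__ == "__main__":
    # A genomics project handling protected health data needs HIPAA coverage.
    choice = cheapest_offering({"HIPAA"})
    print(f"{choice.provider} / {choice.instance_type} at ${choice.hourly_rate}/hr")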

Meet the Players

Amazon, Dell, HP and Microsoft are all major contributors to the new Internet2 service offering. Through its partnership with CENIC, Internet2 will offer an Amazon brokerage point in the form of a self-service portal. Members will have access to all eligible AWS services at a discounted rate and will benefit from expanded payment options.

HP, for its part, has been white-labeling parts of its HPC cloud environment for some time now through its HP Labs arm. Historically, the company has done a lot of work with research universities and individual university contracts, Waggener shares; now, with its expanded commercial cloud offering, it is linking those offerings together. Several Internet2 member organizations have been working with HP over the last 18 months on different use cases, utilizing environments that HP built for SHI for very high-end, high-availability commercial customers. These environments offer sophisticated migration and provisioning tools that make for some very interesting use cases. The use case unveiled at the Internet2 member meeting is high-end, high-availability application environment hosting, which allows institutions to move sensitive data with strong compliance requirements, as in the case of genomics workloads or real-time medical imaging.

“HP is doing a lot of custom work with us, while Dell is engaged with us in a similar process but taking some of its commercial offerings to leverage across, and the Microsoft Azure cloud is going to be available to all Internet2 member institutions for two different processes,” says Waggener. First, they’ll be collaborating with Microsoft Research on a big data research initiative and second, the Azure cloud will be available to Internet2 members for any purpose with no networking charges. Internet2 will have multiple 10Gig connections directly into the Azure cloud, so researchers can spin up an Azure instance without worrying about the data transfer rates in and out (since Internet2 Net+ will own the networking). This will be a huge boon to researchers who don’t really have predictable data transfer needs, adds Waggener.

Dell likes to say that it's involved in the project from the "desktop to the datacenter" with three initial offerings: Virtual Desktop-as-a-Service, vCloud Datacenter Service and the Dell Virtual Private Data Center (VPDC) Services. Dell is also collaborating with the Internet2 community to advance innovation around big data and research storage services.

Compliance and security are top priorities for the project and the vendors. This sentiment was underscored by Dell’s Director of Global Education, Jon Phillips, who told HPC in the Cloud that when it comes to research and education, the compliance bar is set very high. “Not just any cloud provider can jump into the space and deliver a secure environment that addresses even the minimal base industry standards set forth by HIPAA, FERPA, FISMA, and at the same time be able to do it in a cost-effective and convenient manner,” Phillips stated. Dell’s Plano, Texas, datacenter is not only compliant with those standards, it also connects directly to Internet2. This is part of Dell’s value proposition and something the community has been asking for. Phillips adds that many of the Internet2 member organizations have also been long-time Dell customers.

FISMA, the Federal Information Security Management Act, comes into play especially for researchers working on government grant projects that require this level of compliance from a data-handling perspective; the environment Dell offers out of its Plano, Texas, datacenter allows those controls to be put in place.

This project as a whole is promising for its ability to take a lot of the frustration and complexity out of the cloud equation, as Phillips relates: “[Researchers] want to work with providers that can build an agreement structure, and a wrapper that allows for quick provisioning of the solutions and an easy way to procure that is safe-and-sound, vetted by community. That’s the benefit that Internet2’s NET+ program brings to the member community. The solution has been vetted, has the right procurement mechanisms that are important to higher education. A university CIO knows that when they procure something from NET+ program that it’s been through those gated elements.”

While much of the recent news involves commercial-to-community offerings, community peer-to-peer is another important service model. Both approaches require facilitated brokerage to take the cost out and to ensure that the models are in place throughout the community. To that end, Internet2 is working with universities that have HPC resources they want to offer to other institutions as spot cloud instances.

“The worst thing we could do is invest hundreds of millions in capital in any of our institutions and then have them utilized at less than 100 percent,” says Waggener, noting that average utilization for a regular cluster is 20-something percent, while a dedicated high-performance cluster is kept occupied about half the time. Consider a research team based in Chicago that needs a staging environment on its way to a major national center but doesn't want to pay for a capital outlay. If the team can spin up an instance of the environment it needs in the Princeton datacenter, then Princeton recovers some orphaned costs and Chicago gains immediate access to resources without having to go through a purchasing cycle.
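Waggener's utilization figures translate into a simple back-of-the-envelope calculation. The Python sketch below spreads a cluster's capital cost over the core-hours actually consumed; the purchase price, core count, and lifetime are made-up inputs, and the utilization rates only approximate the ones quoted above.

# Back-of-the-envelope effective-cost calculation based on the utilization
# figures quoted above (roughly 25% for a regular campus cluster, roughly
# 50% for a dedicated HPC cluster). The capital cost, core count, and
# lifetime are hypothetical inputs chosen only to illustrate the arithmetic.

def effective_cost_per_core_hour(capital_usd, cores, lifetime_years, utilization):
    """Capital cost spread over the core-hours actually consumed."""
    total_core_hours = cores * lifetime_years * 365 * 24
    used_core_hours = total_core_hours * utilization
    return capital_usd / used_core_hours


if __name__ == "__main__":
    CAPITAL = 5_000_000   # hypothetical cluster purchase price (USD)
    CORES = 10_000
    LIFETIME = 4          # years in service

    for label, util in [("regular cluster", 0.25), ("dedicated HPC cluster", 0.50)]:
        cost = effective_cost_per_core_hour(CAPITAL, CORES, LIFETIME, util)
        print(f"{label}: about ${cost:.3f} per utilized core-hour at {util:.0%} utilization")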

Internet2 will be deploying these services in a phased approach. There’s a service validation phase where multiple institutions commit their time and energy to working with the providers to ensure that the service has been customized to meet the needs of higher education. Then the service gets promoted to an early adopter phase, where it rolls out to a broader set of Internet2 members, the beta customers, who can contribute final adjustments and tuning. At this point, the service will convert to general availability, enabling any Internet2 member to add it to their subscription and start using it.
