Cloud Services Satisfy a Higher Calling

By Tiffany Trader

May 15, 2012

Cloud computing is enabling services at scale for everyone, from scientific organizations and commercial providers to individual consumers. Higher education, in particular, has many collaborative projects that lend themselves to cloud services; however, those services are often not tailored to the uniqueness of an academic environment. For example, very few businesses have their research departments work with their competitors, whereas in higher education, most research collaborations occur between institutions. That’s where the Internet2 NET+ project comes in. During its annual member meeting, the networking consortium announced the addition of 16 cloud services to its NET+ program, aimed at reducing the barriers to research. HPC in the Cloud spoke with Shel Waggener, Senior Vice President of Internet2 NET+ and Associate Vice Chancellor & CIO for the University of California, Berkeley, to get the full story.

Internet2 sees itself as a bridge between the academic communities and commercial vendors. “We’re focused on cloud computing enabling scale for a community,” Waggener stated, adding, “The ability to have any researcher, any student, anywhere at any institution and instantly use services together is a very powerful opportunity.”

Internet2 is probably best known for its 100 Gigabit Ethernet network, with 8.8 terabits per second of aggregate capacity, used by the national science labs and the research institutions that are Internet2 members. The not-for-profit was established to support research at higher education institutions in the United States. Since 1996, its mission has focused on removing the barriers to research, and one of those barriers has been the network itself, since researchers often require a level of network capacity beyond the scope of commercial carriers. With the advance of cloud computing, the same limitation now applies to services accessed through the network (e.g., IaaS, PaaS, SaaS). The expanded NET+ offering allows Internet2 members to simply add the services they want to their core membership.

In the current model, individual researchers must go through the sometimes complex, costly and time-consuming process of creating a cloud environment on their own. This first step is a big one: there are contractual terms, payment and billing options, and other administrative tasks to attend to. Then the service has to be set up to enable sharing across multiple team members and multiple organizations, each of which also needs to create accounts and implement security protocols.

From Waggener: “There is a lot of work done every day by researchers around the world that is in essence lost, a one-time effort with no marginal gain, because as soon as they do that work, then they’re focused on their science, and when they’re done, it’s gone. All the work that went into enabling that science has been sunset. Through the NET+ services model, there is more effort at the outset – collaboration isn’t free – but the payoffs are huge.”

With Internet2, there is a master agreement with the provider, and then there’s a campus member agreement that allows users to add subscriptions to these services. All of the terms are signed off on by legal counsel at the member institutions, so as a faculty member, you know exactly what you are going to get.

Internet2 is taking community-developed services, built for specific researchers or specific disciplines, and moving them into a community cloud architecture. The consortium is applying its investments in middleware and its innovations in federated identity, allowing researchers to use their local institutional credentials and be validated at another institution through InCommon’s identity management services. This makes it possible for a Berkeley student to obtain instant access to services at Michigan or Harvard, and allows faculty members from different universities to collaborate on data analytics or share computing resources.
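To make the federation model concrete, the following is a minimal, hypothetical sketch in Python of what membership buys a shared service: instead of issuing its own accounts, the service trusts sign-ons asserted by a member campus’s identity provider and checks the issuer against the federation’s membership list. The entity IDs, attribute names and helper function are illustrative assumptions only, not the actual InCommon or NET+ implementation; in practice this role is handled by SAML software such as Shibboleth.

# Hypothetical sketch of federated access control. The entity IDs and
# attribute names below are placeholders, not real InCommon metadata.

FEDERATION_MEMBERS = {
    # identity provider entity ID -> institution name
    "https://idp.berkeley.example/shibboleth": "UC Berkeley",
    "https://idp.umich.example/shibboleth": "University of Michigan",
}

def grant_access(assertion: dict) -> str:
    """Admit a user based on an assertion issued by their home campus."""
    issuer = assertion.get("issuer")
    if issuer not in FEDERATION_MEMBERS:
        raise PermissionError("identity provider is not a federation member")
    # The user keeps their campus identity; the shared service never stores a password.
    eppn = assertion["eduPersonPrincipalName"]  # e.g. "researcher@berkeley.edu"
    return f"{eppn} admitted via {FEDERATION_MEMBERS[issuer]}"

if __name__ == "__main__":
    print(grant_access({
        "issuer": "https://idp.berkeley.example/shibboleth",
        "eduPersonPrincipalName": "grad-student@berkeley.edu",
    }))

The point of the sketch is the trust relationship: once a campus is a federation member, every service in the community can admit its users without any per-service account creation.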

But to make an HPC cloud project truly successful, Waggener believes they need to integrate the commercial solutions that exist today. “We’re taking advantage of economies of scale here, not trying to replicate Blue Gene,” notes Waggener.

The strategy behind the latest round of cloud service partnerships is to take offerings that were designed for the commercial sector and help tune them for higher education, while keeping costs down. By its nature, the higher ed space is more difficult to work with than other domains, as a single institution contains every possible type of engagement. A solution that is perfect for one department may not be ideal for another. Waggener explains that fine-tuning services to meet these unique needs usually creates cost barriers for companies trying to offer services to higher education. The goal of the NET+ program is to eliminate those cost premiums for the commercial providers and, in doing so, simplify business processes on the academic side, so that both parties can take out the unnecessary costs – administrative, legal, contractual and so on – while enabling faster adoption of services.

In Waggener’s view, the biggest challenge in traditional academic computing is that the largest resources are always constrained. They become oversubscribed immediately, no matter how large they are or how quickly they are deployed, and this oversubscription creates underutilization of the resource. Queue management becomes a significant problem, Waggener notes, and you end up with code being deployed that hasn’t been fully optimized for that level of research. Some of the largest big data analysis jobs are left waiting for significant blocks of time before they can get to their science. The instrument isn’t the challenge, says Waggener; it’s all of the dynamics around tuning the specific experiment or analytic activity to that particular resource.

Now, with the advance of cloud computing, there is an explosion in global capacity and resources, but researchers are still single-threading their applications.

“If you want to use some number of machines simultaneously, the question becomes how do you do that? Do you get an account at Amazon? Do you run it through your credit card? What if you want to share all that information and results with someone else? You basically have to create a relationship between the individual researchers and Amazon, that’s a costly and time-intensive task,” comments Waggener.

“The Amazon setup has historically been for small-to-medium businesses, but that’s not how researchers work. The right approach isn’t to get in the way of researchers who want to immediately access those resources, but in fact to have those brokerages done in advance so that the contracts are already in place, so they can log in using their institutional credentials and pick a resource availability from Dell, or from IBM, or from Amazon, in a brokered fashion that takes care of all the complexities of higher education. For the first time, we can work with commercial providers who can leverage their R&D cost for commercial purposes and not have to simply work with them to negotiate a discount price off of their commercial rate for education but instead tune the offering and remove many of the costs that drive the expenditures and overhead for the commercial side and the higher ed side.”

The result is custom-tuned services – both in regard to terms and conditions and, in many cases, the offering itself – designed to meet the community’s needs.

Meet the Players

Amazon, Dell, HP and Microsoft are all major contributors to the new Internet2 service offering. Through its partnership with CENIC, Internet2 will offer an Amazon brokerage point in the form of a self-service portal. Members will have access to all eligible AWS services at a discounted rate and will benefit from expanded payment options.
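As a rough illustration of how such a brokered sign-on can work, the sketch below (Python, using the boto3 AWS SDK and AWS’s SAML federation support) exchanges a campus-issued SAML assertion for temporary AWS credentials and then launches an instance, so no individual researcher ever opens an account or enters a credit card. The role ARNs, AMI and assertion source are placeholders, and the sketch is an assumption about the general pattern rather than the actual NET+/CENIC portal implementation.

# Hypothetical sketch: brokered, federated AWS access. Requires the boto3 SDK.
# Role ARNs and the AMI are placeholders, not real NET+ resources.
import boto3

def launch_with_campus_login(saml_assertion_b64: str):
    sts = boto3.client("sts")
    # Exchange the campus-issued SAML assertion for short-lived AWS credentials.
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::123456789012:role/Internet2-Researcher",
        PrincipalArn="arn:aws:iam::123456789012:saml-provider/CampusIdP",
        SAMLAssertion=saml_assertion_b64,
        DurationSeconds=3600,
    )
    creds = resp["Credentials"]

    # Use the temporary credentials to start a compute instance under the
    # pre-negotiated institutional agreement.
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder machine image
        InstanceType="c5.large",
        MinCount=1,
        MaxCount=1,
    )

In this pattern the contractual and billing relationship lives with the institution and the broker, while the researcher’s only credential is the one their campus already issued.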

HP, for its part, has been white-labeling parts of its HPC cloud environment for some time now through its HP Labs arm. Historically, the company has done a lot of work with research universities under individual university contracts, Waggener shares, and now, with its expanded commercial cloud offering, it is linking those offerings together. Several Internet2 member organizations have been working with HP over the last 18 months on different use cases, utilizing environments that HP built for SHI for very high-end, high-availability commercial customers. These environments offer sophisticated migration and provisioning tools that make for some very interesting use cases. The use case unveiled at the Internet2 member meeting is high-end, high-availability application environment hosting, allowing institutions to move sensitive data with strong compliance requirements, as in the case of genomics workloads or real-time medical imaging.

“HP is doing a lot of custom work with us, while Dell is engaged with us in a similar process but taking some of its commercial offerings to leverage across, and the Microsoft Azure cloud is going to be available to all Internet2 member institutions for two different processes,” says Waggener. First, they’ll be collaborating with Microsoft Research on a big data research initiative and second, the Azure cloud will be available to Internet2 members for any purpose with no networking charges. Internet2 will have multiple 10 Gigabit connections directly into the Azure cloud, so researchers can spin up an Azure instance without worrying about the data transfer rates in and out (since Internet2 NET+ will own the networking). This will be a huge boon to researchers who don’t really have predictable data transfer needs, adds Waggener.

Dell likes to say that it’s involved in the project from the “desktop to the datacenter” with three initial offerings: Virtual Desktop-as-a-Service, vCloud Datacenter Service and the Dell Virtual Private Cloud (VPDC) Services. Dell is also collaborating with the Internet2 community to advance innovation around big data and research storage services.

Compliance and security are top priorities for the project and the vendors. This sentiment was underscored by Dell’s Director of Global Education, Jon Phillips, who told HPC in the Cloud that when it comes to research and education, the compliance bar is set very high. “Not just any cloud provider can jump into the space and deliver a secure environment that addresses even the minimal base industry standards set forth by HIPAA, FERPA, FISMA, and at the same time be able to do it in a cost-effective and convenient manner,” Phillips stated. Dell’s Plano, Texas, datacenter is not only compliant with those standards, it also connects directly to Internet2. This is part of Dell’s value proposition and something the community has been asking for. Phillips adds that many of the Internet2 member organizations have also been long-time Dell customers.

FISMA, the Federal Information Security Management Act, comes into play especially for researchers working on government grant projects that require this level of compliance from a data handling perspective, and the environment that Dell is offering out of its Plano, Texas, datacenter allows those controls to be put in place.

This project as a whole is promising for its ability to take a lot of the frustration and complexity out of the cloud equation, as Phillips relates: “[Researchers] want to work with providers that can build an agreement structure, and a wrapper that allows for quick provisioning of the solutions and an easy way to procure that is safe-and-sound, vetted by community. That’s the benefit that Internet2’s NET+ program brings to the member community. The solution has been vetted, has the right procurement mechanisms that are important to higher education. A university CIO knows that when they procure something from NET+ program that it’s been through those gated elements.”

While much of the recent news involves commercial-to-community offerings, community peer-to-peer is another important service model. Both approaches require facilitated brokerage to take the cost out and to ensure that the models are in place throughout the community. To that end, Internet2 is working with universities that have HPC resources they want to offer to other institutions as spot cloud instances.

“The worst thing we could do is invest hundreds of millions in capital in any of our institutions and then have them utilized at less than 100 percent,” says Waggener, noting that average utilization for a regular cluster is in the 20-percent range, while a dedicated high performance cluster is kept occupied about half the time. What if a research team based in Chicago needs a staging environment before moving to a major national center, without having to pay for a capital outlay? If the team can spin up an instance of the environment it needs in the Princeton datacenter, then Princeton recovers some orphan costs and Chicago gains immediate access to resources without having to go through a purchasing cycle.

Internet2 will be deploying these services in a phased approach. There’s a service validation phase where multiple institutions commit their time and energy to working with the providers to ensure that the service has been customized to meet the needs of higher education. Then the service gets promoted to an early adopter phase, where it rolls out to a broader set of Internet2 members, the beta customers, who can contribute final adjustments and tuning. At this point, the service will convert to general availability, enabling any Internet2 member to add it to their subscription and start using it.
