Cloud Services Satisfy a Higher Calling

By Tiffany Trader

May 15, 2012

Cloud computing is enabling services at scale for everyone, from scientific organizations and commercial providers to individual consumers. Higher education, in particular, has many collaborative projects that lend themselves to cloud services; however, those services are often not tailored to the uniqueness of an academic environment. For example, very few businesses have their research department work with their competitors, whereas in higher education, most research collaborations occur between institutions. That’s where the Internet2 NET+ project comes in. During its annual member meeting, the networking consortium announced the addition of 16 cloud services to its NET+ program, aimed at reducing the barriers to research. HPC in the Cloud spoke with Shel Waggener, Senior Vice President of Internet2 NET+ and Associate Vice Chancellor & CIO for the University of California, Berkeley, to get the full story.

Internet2 sees itself as a bridge between the academic communities and commercial vendors. “We’re focused on cloud computing enabling scale for a community,” Waggener stated, adding, “The ability to have any researcher, any student, anywhere at any institution and instantly use services together is a very powerful opportunity.”

Internet2 is probably best known for its 100 Gigabit Ethernet network, with 8.8 terabits of aggregate capacity, used by the national science labs and the research institutions that are Internet2 members. The not-for-profit was established to support research in US higher education. Since 1996, its mission has focused on removing barriers to research, and one of those barriers has been the network itself, since researchers often require a level of network capacity beyond the scope of commercial carriers. With the advance of cloud computing, the same limitation now applies to services accessed through the network (IaaS, PaaS, SaaS, etc.). The expanded NET+ offering allows Internet2 members to simply add the services they want to their core membership.

In the current model, individual researchers must go through the sometimes complex, costly and time-consuming process of creating a cloud environment on their own. This first step is a very big one. There are contractual terms, payment and billing options and other administrative tasks that must be attended to, then the service has to be set up to enable sharing across multiple team members and multiple organizations. Each of these parties would also need to create accounts and implement security protocols.

From Waggener: “There is a lot of work done every day by researchers around the world that is in essence lost, a one-time effort with no marginal gain, because as soon as they do that work, then they’re focused on their science, and when they’re done, it’s gone. All the work that went into enabling that science has been sunset. Through the NET+ services model, there is more effort at the outset – collaboration isn’t free – but the payoffs are huge.”

With Internet2, there is a master agreement with the provider, and then a campus member agreement that allows users to add subscriptions to these services. All the terms are signed off on by legal counsel at the member institutions, so as a faculty member, you know exactly what you are going to get.

Internet2 is taking community-developed services, built for specific researchers or disciplines, and moving them into a community cloud architecture. It is leveraging its investments in middleware and innovations in federated identity to allow researchers to use their local institutional credentials and be validated at another institution through InCommon’s identity management services. This makes it possible for a Berkeley student to obtain instant access to services at Michigan or Harvard, and allows faculty members from different universities to collaborate on data analytics or to share computing resources.
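The federated pattern described above can be sketched in miniature: a user’s home institution (the identity provider) vouches for the user, and a service at another institution trusts the federation rather than the individual. This is a toy illustration with made-up keys and institution names; real NET+ deployments rely on InCommon’s SAML-based infrastructure, not this scheme.

```python
import hashlib
import hmac
import json

# Hypothetical federation key registry (assumption for illustration only).
FEDERATION_KEYS = {
    "berkeley.edu": b"berkeley-secret",
    "umich.edu": b"umich-secret",
}

def issue_assertion(idp: str, user: str) -> dict:
    """Home institution signs a claim about one of its own users."""
    payload = json.dumps({"idp": idp, "user": user}, sort_keys=True)
    sig = hmac.new(FEDERATION_KEYS[idp], payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def accept_assertion(assertion: dict) -> str:
    """Service at a different institution verifies via the shared federation."""
    claims = json.loads(assertion["payload"])
    key = FEDERATION_KEYS[claims["idp"]]
    expected = hmac.new(key, assertion["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["sig"]):
        raise ValueError("assertion signature invalid")
    return f'{claims["user"]}@{claims["idp"]}'

token = issue_assertion("berkeley.edu", "student42")
print(accept_assertion(token))  # student42@berkeley.edu
```

The point of the design is that the remote service never holds the user’s credentials; it only needs a trust relationship with the federation, which is what lets a Berkeley login work at Michigan.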

But to make an HPC cloud project truly successful, Waggener believes they need to integrate in the commercial solutions that exist today. “We’re taking advantage of economies of scale here, not trying to replicate Blue Gene,” notes Waggener.

The strategy behind the latest round of cloud service partnerships is to take offerings that were designed for the commercial sector and help tune them for higher education, while keeping costs down. By its nature, the higher ed space is more difficult to work with than other domains, since a single institution contains every possible type of engagement. A solution that is perfect for one department may not be ideal for another. Waggener explains that fine-tuning services to meet these unique needs usually creates cost barriers for companies trying to offer services to higher education. The goal of the NET+ program is to eliminate those cost premiums for the commercial providers and in doing so simplify business processes on the academic side, so that both parties can take out the unnecessary costs – administrative, legal, contractual and so on – while enabling the faster adoption of services.

In Waggener’s view, the biggest challenge in traditional academic computing is that the largest resources are always constrained. They become oversubscribed immediately no matter how large they are and how quickly they are deployed, and this oversubscription creates underutilization of the resource. Queue management becomes a significant problem, Waggener notes, and you end up with code being deployed that hasn’t been fully optimized for that level of research. Some of the largest big data analysis jobs are left waiting for significant blocks of time to achieve their science. The instrument isn’t the challenge, says Waggener; it’s all of the dynamics around tuning the specific experiment or analytic activity on that particular resource.

Now, with the advance of cloud computing, there is an explosion in global capacity, in resources, but researchers are still single threading their applications.

“If you want to use some number of machines simultaneously, the question becomes how do you do that? Do you get an account at Amazon? Do you run it through your credit card? What if you want to share all that information and results with someone else? You basically have to create a relationship between the individual researchers and Amazon, that’s a costly and time-intensive task,” comments Waggener.

“The Amazon setup has historically been for small-to-medium businesses, but that’s not how researchers work. The right approach isn’t to get in the way of researchers who want to immediately access those resources, but in fact to have those brokerages done in advance so that the contracts are already in place, so they can log in using their institutional credentials and pick a resource availability from Dell, or from IBM, or from Amazon, in a brokered fashion that takes care of all the complexities of higher education. For the first time, we can work with commercial providers who can leverage their R&D cost for commercial purposes and not have to simply work with them to negotiate a discount price off of their commercial rate for education but instead tune the offering and remove many of the costs that drive the expenditures and overhead for the commercial side and the higher ed side.”

The result is custom-tuned services – both in regard to terms and conditions and, in many cases, the offering itself – designed to meet the community’s needs.

Meet the Players

Amazon, Dell, HP and Microsoft are all major contributors to the new Internet2 service offering. Through its partnership with CENIC, Internet2 will offer an Amazon brokerage point in the form of a self-service portal. Members will have access to all eligible AWS services at a discounted rate and will benefit from expanded payment options.

HP, for its part, has been white-labeling parts of its HPC cloud environment for some time now through its HP Labs arm. Historically they have done a lot of work with research universities and individual university contracts, Waggener shares. Now with their expanded commercial cloud offering, they are linking those offerings together. Several Internet2 member organizations have been working with HP over the last 18 months on different use cases. They’ve been utilizing environments that HP built for SHI for very high-end, high-availability commercial customers. These offer some sophisticated migration tools and provisioning tools that make for some very interesting use cases. The use case that was unveiled at the Internet2 member meeting is for high-end, high-availability application environment hosting, allowing institutions to move sensitive data with strong compliance requirements, as in the case of genomics workloads or real-time medical imaging.

“HP is doing a lot of custom work with us, while Dell is engaged with us in a similar process but taking some of its commercial offerings to leverage across, and the Microsoft Azure cloud is going to be available to all Internet2 member institutions for two different processes,” says Waggener. First, they’ll be collaborating with Microsoft Research on a big data research initiative and second, the Azure cloud will be available to Internet2 members for any purpose with no networking charges. Internet2 will have multiple 10Gig connections directly into the Azure cloud, so researchers can spin up an Azure instance without worrying about the data transfer rates in and out (since Internet2 Net+ will own the networking). This will be a huge boon to researchers who don’t really have predictable data transfer needs, adds Waggener.

Dell likes to say that it’s involved in the project from the “desktop to the datacenter” with three initial offerings: Virtual Desktop-as-a-Service, vCloud Datacenter Service and the Dell Virtual Private Data Center (VPDC) Services. Dell is also collaborating with the Internet2 community to advance innovation around big data and research storage services.

Compliance and security are top priorities for the project and the vendors. This sentiment was underscored by Dell’s Director of Global Education, Jon Phillips, who told HPC in the Cloud that when it comes to research and education, the compliance bar is set very high. “Not just any cloud provider can jump into the space and deliver a secure environment that addresses even the minimal base industry standards set forth by HIPAA, FERPA, FISMA, and at the same time be able to do it in a cost-effective and convenient manner,” Phillips stated. Dell’s Plano, Texas, datacenter is not only compliant with those standards, it also connects directly to Internet2. This is part of Dell’s value proposition and something the community has been asking for. Phillips adds that many of the Internet2 member organizations have also been long-time Dell customers.

FISMA, the Federal Information Security Management Act, comes into play especially for researchers working on government grant projects that require this level of compliance for data handling; the environment Dell offers in its Plano, Texas, datacenter allows those controls to be put in place.

This project as a whole is promising for its ability to take a lot of the frustration and complexity out of the cloud equation, as Phillips relates: “[Researchers] want to work with providers that can build an agreement structure, and a wrapper that allows for quick provisioning of the solutions and an easy way to procure that is safe-and-sound, vetted by community. That’s the benefit that Internet2’s NET+ program brings to the member community. The solution has been vetted, has the right procurement mechanisms that are important to higher education. A university CIO knows that when they procure something from NET+ program that it’s been through those gated elements.”

While much of the recent news involves commercial-to-community offerings, community peer-to-peer is another important service model. Both approaches require facilitated brokerage to take the cost out and to ensure that the models are in place throughout the community. To that end, Internet2 is working with universities that have HPC resources they want to offer to other institutions as spot cloud instances.

“The worst thing we could do is invest hundreds of millions in capital in any of our institutions and then have them utilized at less than 100 percent,” says Waggener, noting that average utilization for a regular cluster is 20-something percent, while a dedicated high performance cluster is kept occupied about half the time. What if a research team based in Chicago needs a staging environment ahead of a move to a major national center, without having to pay for a capital outlay? If they can spin up an instance of the environment they need in the Princeton datacenter, then Princeton recovers some orphan costs and Chicago gains immediate access to resources without going through a purchasing cycle.
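The economics behind Waggener’s point can be made concrete with back-of-the-envelope arithmetic. The utilization figures (roughly 25 percent for a typical cluster, about 50 percent for a dedicated HPC cluster) come from the article; the capital cost and cluster size below are invented purely for illustration.

```python
# Amortized cost per *used* core-hour at different utilization levels.
# Capital cost and core count are hypothetical; only the utilization
# rates are drawn from Waggener's figures.

capital_cost = 2_000_000                 # hypothetical one-year cluster cost ($)
core_hours_per_year = 1000 * 24 * 365    # hypothetical 1,000-core cluster

def cost_per_used_core_hour(utilization: float) -> float:
    """Spread the capital over only the core-hours actually consumed."""
    return capital_cost / (core_hours_per_year * utilization)

for util in (0.25, 0.50, 1.00):
    print(f"{util:.0%} utilization -> ${cost_per_used_core_hour(util):.2f}/core-hour")
```

Under these assumptions, a cluster idling at 25 percent utilization costs four times as much per delivered core-hour as one kept fully busy, which is exactly the gap that selling spare capacity as spot cloud instances would narrow.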

Internet2 will be deploying these services in a phased approach. There’s a service validation phase where multiple institutions commit their time and energy to working with the providers to ensure that the service has been customized to meet the needs of higher education. Then the service gets promoted to an early adopter phase, where it rolls out to a broader set of Internet2 members, the beta customers, who can contribute final adjustments and tuning. At this point, the service will convert to general availability, enabling any Internet2 member to add it to their subscription and start using it.
