IEEE Declares War on Cloud Computing Challenges

By Nicole Hemsoth

April 4, 2011

News about cloud standardization is in no short supply, emerging from any number of quarters, from vendors to trade associations. For the most part, though, the progress taking place has come from isolated pockets with specific goals. Groups that tackle smaller strands of cloud computing do tend to collaborate, but in the opinion of the IEEE, there is still a great deal of work to do to bring cloud computing into focus and open it to innovation.

The IEEE, the world’s largest professional association devoted to technological advancement, rallied its troops with a new, broad cloud computing initiative released this morning. The effort has a particular focus on lending some much-needed clarity to a complex topic, as well as an extensive interoperability angle. The IEEE feels that its size, diversity and membership will drive progress toward a more robust cloud computing ecosystem, and it is certainly not thinking small.

Two new standards development projects are at the heart of the announcement. IEEE P2301, the “Draft Guide for Cloud Portability and Interoperability Profiles,” and IEEE P2302, the “Draft Standard for Intercloud Interoperability and Federation,” will both work toward minimizing a fragmented, siloed ecosystem, according to Steve Diamond, who serves as chair of the cloud computing initiative.

In advance of this announcement we talked to David Bernstein, IEEE P2301 and IEEE P2302 working group chair and managing director of Cloud Strategy Partners. He sees cloud computing as a game-changing shift, calling it “one of three aspects of the ‘perfect storm’ of technology waves currently sweeping across humanity; the other two being massive deployment of very smart mobile devices and ubiquitous high-speed connectivity.” In the eye of this storm, of course, is the cloud, which will serve as the heart of the other two movements.

Bernstein understands full well that the project is incredibly dense and multi-faceted in size and scope. As a former VP in Cisco’s CTO office who ran the company’s Cloud Lab, with previous executive positions at AT&T, Siebel Systems, Pluris, and InterTrust, he also sees the challenge on the vendor side in tying all of the disparate aspects together. Furthermore, Bernstein notes that he has seen the IEEE’s processes work historically through his involvement as a key contributor to OpenSOA, OASIS, SCA, WS-I, JCP/J2EE and IEEE POSIX.

He compares the gravity of the IEEE’s cloud computing goals to the processes behind the construction of the global long-distance and mobile phone systems and the public internet. On that level, it’s not hard to see how important the organization believes clouds will be in the future if it is willing to take on an effort of such gigantic scale.

A Standard to Procure Against

One of the first items on the IEEE cloud agenda is to clarify exactly what clouds are, how the ecosystem breaks down, and how to view and understand the principles behind decisions about adopting or creating the technology.

IEEE P2301 will provide a roadmap for vendors, service providers, governments and others to aid users “in procuring, developing, building and using standards-based cloud computing products and services, enabling better portability, increased commonality and greater interoperability across the industry.”

Bernstein describes P2301 as an umbrella initiative that will form a guide for portability and interoperability via profiles to aid in procurement processes. He noted that within the U.S. government there are some broad-based cloud computing goals, but governments need to be able to procure against standards, and as of now there is too much fragmentation among the various efforts to create such guides.

He makes it clear that this procurement-driven effort is based on the need for a guide, but that the aim is not necessarily to help users choose among vendors; rather, it is to present a guide that sets forth specifics while leaving room for users to decide.
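
To make the procurement idea concrete, here is a minimal sketch of how an agency might check a vendor offering against an interoperability profile. The field names and requirements below are hypothetical illustrations invented for this sketch, not contents of the actual P2301 draft.

```python
# Purely illustrative: a hypothetical "profile" of the kind a procurement
# guide like IEEE P2301 might define. Every field name here is an
# assumption made for the example, not drawn from the draft itself.

REQUIRED_PROFILE = {
    "workload_packaging": {"ovf"},            # accepted portable image formats
    "management_api": {"rest"},               # accepted management interfaces
    "identity_federation": {"saml", "oauth"},
    "data_export": {"bulk_download"},         # exit/portability guarantee
}

def profile_gaps(offering: dict) -> list:
    """Return the profile requirements a vendor offering fails to meet."""
    gaps = []
    for requirement, accepted in REQUIRED_PROFILE.items():
        provided = set(offering.get(requirement, []))
        if not provided & accepted:   # no overlap with any accepted option
            gaps.append(requirement)
    return gaps

# Example: a hypothetical vendor response to a tender.
vendor = {
    "workload_packaging": ["ovf", "proprietary"],
    "management_api": ["rest"],
    "identity_federation": ["oauth"],
}

print(profile_gaps(vendor))  # -> ['data_export']: one gap to resolve
```

The point of procuring against a profile, rather than against a vendor list, is exactly what the checklist shape suggests: the buyer names capabilities and accepted standards, and any offering that satisfies them qualifies.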

Groups like the Cloud Security Alliance, the Distributed Management Task Force (DMTF), and others have done some good work, but what they put out is not cohesive enough to wrap procurement policies around, since it lacks the version control, voting processes, and other approval and refinement practices of a formal standards body. To put this in perspective, Bernstein says that despite the solid efforts of a group like the Cloud Security Alliance, a government cannot say “let’s use the xx standard to procure against,” which is a problem as the U.S. in particular moves forward on its Cloud First policy.

Bernstein remembers the old UNIX days when, much like today, there were any number of groups with their own missions and profiles, and he sees similarities with where the cloud stands now. He claims we are at a very natural point in the evolution of such a process, with a great deal of splitting and divergence among groups, vendors and standardization efforts.

In his opinion, the IEEE is really the only organization that can bring together the many parties involved with the standardization of clouds, in part due to the group’s global scope, its many publications and other forums, and the volunteer nature that encourages member involvement and input. He says that “The Cloud Security Alliance is an ad-hoc association, the DMTF is a pay to play trade association and so is the Open Grid Forum. Inside we know the guys in all those organizations and there is some coordination,” but he claims that none of them is a top-tier international organization with the power to pull all of these disparate missions together.

An Eye on Interoperability

One of the “big picture” first projects the IEEE will tackle is rather dramatic in scope. The issues of federation, interoperability and portability are at the heart of hundreds of debates, papers and conferences, but it is a slow road to results, a point with which Bernstein agrees. He feels that even though the path to portability is a long one, the roadmap the IEEE has followed with any number of other standards will apply here, hastened by widespread collaboration and information-sharing, some of which is enabled by the cloud itself.

IEEE P2302 will set forth the base “topology, protocols, functionality and governance required for reliable cloud-to-cloud interoperability and federation.” The working group behind it hopes to build an “economy of scale among cloud product and service providers that remains transparent to users and applications.” The organization hopes this will help support the still-maturing cloud ecosystem while also pushing interoperability, much as SS7/IN did for telephone systems years ago.
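
As a rough intuition for what cloud-to-cloud federation implies, consider the sketch below. It is not drawn from the P2302 draft; the broker, the member-cloud classes, and the lookup step are all assumptions made for illustration, meant only to show how a federation layer could keep provider choice transparent to users and applications, in the spirit of the DNS and roaming analogies Bernstein raises later.

```python
# A toy illustration of the federation concept behind IEEE P2302:
# a broker resolves requests across member clouds so the caller never
# needs to know which provider actually serves them. All classes and
# names here are invented; they do not reflect the draft standard's
# actual topology, protocols, or governance model.

class MemberCloud:
    def __init__(self, name, services):
        self.name = name
        self.services = set(services)

    def can_serve(self, service):
        return service in self.services


class FederationBroker:
    """Directory role, loosely analogous to DNS or mobile roaming lookup."""

    def __init__(self):
        self.members = []

    def register(self, cloud):
        self.members.append(cloud)

    def resolve(self, service):
        # Return the first federated cloud that can serve the request;
        # a real system would add trust, governance, and SLA checks here.
        for cloud in self.members:
            if cloud.can_serve(service):
                return cloud
        raise LookupError(f"no federated cloud offers {service!r}")


broker = FederationBroker()
broker.register(MemberCloud("cloud-a", {"object-storage"}))
broker.register(MemberCloud("cloud-b", {"batch-compute", "object-storage"}))

# The caller asks for a capability, not a provider; that transparency to
# users and applications is the property the working group describes.
print(broker.resolve("batch-compute").name)  # -> cloud-b
```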

“We’ve reached out to a lot of our members and companies and found that there’s a lot of confusion within our constituency about cloud computing, especially for those who are trying to advance the technology, either as service providers, researchers or governments. All stakeholders are having a difficult time sorting out the technologies and how they fit together in addition to just being able to identify the exact standards issues.”

Bernstein claims that while there are a number of organizations tackling specific issues in the broad cloud interoperability space, a few items have been overlooked or not given appropriate weight. Although several organizations have interoperability at their core, he says his group seeks to fill the gaps in those efforts; the lack of measurements, for instance, is one weak area.

He compares the IEEE approach to interoperability to the way other standards have been pushed through. He says, “Think about it—when you get off a plane somewhere your phone just works. That’s because under the covers years ago we worked to solve exactly that problem—tackling the mobile infrastructure topology to create roaming capabilities. Even with the internet there’s this same thing with DNS and peering with autonomous system numbers and routing protocols.”

Bernstein continued to put cloud advancement in context, stating, “All of this took a long time but this is how innovation evolves… We’re at the same place with cloud today; there are walled gardens of great innovation—like then, it is still something of a closed system because that’s just how things develop.”
 
