IEEE Declares War on Cloud Computing Challenges

By Nicole Hemsoth

April 4, 2011

There is no shortage of news about cloud standardization emerging from any number of quarters, from vendors to trade associations. For the most part, the progress taking place has come from isolated pockets with specific goals. Groups that tackle smaller strands of cloud computing do tend to collaborate, but in the opinion of the IEEE, there is still a great deal of work to do to bring cloud computing into focus and open it to innovation.

The IEEE, the world’s largest professional association devoted to technological advancement, rallied its troops this morning with a new, broad cloud computing initiative. The effort focuses on lending some much-needed clarity to a complex topic, with an extensive interoperability angle. The IEEE feels that its size, diversity and membership will drive progress toward a more robust cloud computing ecosystem, and it is certainly not thinking small.

Two new standards development projects are at the heart of the announcement. IEEE P2301, the “Draft Guide for Cloud Portability and Interoperability Profiles,” and IEEE P2302, the “Draft Standard for Intercloud Interoperability and Federation,” will both work to minimize fragmentation and siloing in the ecosystem, according to Steve Diamond, who serves as chair of the cloud computing initiative.

In advance of this announcement we talked to David Bernstein, IEEE P2301 and IEEE P2302 working group chair and managing director of Cloud Strategy Partners. He sees cloud computing as a game-changing shift, calling it “one of three aspects of the ‘perfect storm’ of technology waves currently sweeping across humanity; the other two being massive deployment of very smart mobile devices and ubiquitous high-speed connectivity.” In the eye of this storm, of course, is the cloud, which will serve as the heart of the other two movements.

Bernstein understands full well that the project is incredibly broad and multi-faceted. A former VP in Cisco’s CTO office who ran the company’s Cloud Lab, with previous executive positions at AT&T, Siebel Systems, Pluris, and InterTrust, he also sees the challenge on the vendor side in tying all of the disparate pieces together. Furthermore, Bernstein has seen the IEEE’s standards process at work firsthand through his involvement as a key contributor to OpenSOA, OASIS, SCA, WS-I, JCP/J2EE and IEEE POSIX.

He compares the scale of the IEEE’s cloud computing goals to the efforts behind the construction of the global long-distance and mobile phone systems and the public internet. On that level, it’s not hard to see how important the organization believes clouds will be if it is willing to take on an effort of such gigantic scale.

A Standard to Procure Against

One of the first items on the IEEE cloud agenda is to clarify exactly what clouds are, how the ecosystem breaks down, and how to view and understand the principles behind decisions about adopting or creating the technology.

IEEE P2301 will provide a roadmap for vendors, service providers, governments and others to aid users “in procuring, developing, building and using standards-based cloud computing products and services, enabling better portability, increased commonality and greater interoperability across the industry.”

Bernstein describes P2301 as an umbrella initiative that will form a guide for portability and interoperability via profiles to aid in procurement processes. He noted that within the U.S. government there are some broad-based cloud computing goals, but governments need to be able to procure against standards, and as of now there is too much fragmentation among the various efforts to create such guides.

He makes it clear that this procurement-driven effort is based on the need for a guide, but that the aim is not necessarily to help users choose among vendors; rather, it is to set forth some specifics while leaving room for users to decide for themselves.

Groups like the Cloud Security Alliance, the Distributed Management Task Force (DMTF), and others have done some good work, but what they put out is not cohesive enough to wrap procurement policies around, since the same version control, voting processes and other approval and refinement practices are not in place. To put this in perspective, Bernstein says that despite the solid efforts of a group like the Cloud Security Alliance, a government cannot say “let’s use the xx standard to procure against,” which is a problem as the U.S. in particular moves forward on its Cloud First policy.

Bernstein remembers the old UNIX days, when there were any number of groups with their own missions and profiles, and remarks that the cloud is in a similar place today. He claims we are at a very natural point in the evolution of such a process, with a great deal of splitting and divergence among groups, vendors and standardization efforts.

In his opinion, the IEEE is really the only organization capable of bringing together the many parties involved in the standardization of clouds. This is due in part to the group’s global scope, its many publications and other forums, and a volunteer culture that encourages member involvement and input. He says that “The Cloud Security Alliance is an ad-hoc association, the DMTF is a pay to play trade association and so is the Open Grid Forum. Inside we know the guys in all those organizations and there is some coordination,” but he claims that none of them is a top-tier international organization with the power to pull all of these disparate missions together.

An Eye on Interoperability

One of the first “big picture” projects the IEEE will tackle is rather dramatic in scope. The issues of federation, interoperability and portability are at the heart of hundreds of debates, papers and conferences, but it is a slow road to results, a point Bernstein concedes. He feels that even though the path to portability is a long one, the roadmap the IEEE has followed with any number of other standards will apply here, hastened by widespread collaboration and information-sharing, some of which is enabled by the cloud itself.

IEEE P2302 will set forth the base “topology, protocols, functionality and governance required for reliable cloud-to-cloud interoperability and federation.” The working group behind it hopes to build an “economy of scale among cloud product and service providers that remains transparent to users and applications.” The organization hopes this will help support the still-maturing cloud ecosystem while pushing interoperability forward, much as previous efforts like SS7/IN did for telephone systems years ago.
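To make the federation concept concrete, consider a minimal Python sketch of the kind of cloud-to-cloud exchange P2302 is meant to standardize. This is purely illustrative; the class, fields and capability names below are hypothetical and are not drawn from the draft standard.

    from dataclasses import dataclass, field

    @dataclass
    class Cloud:
        # Hypothetical stand-in for a federating cloud provider; not from P2302.
        name: str
        capabilities: set
        peers: dict = field(default_factory=dict)

        def advertise(self) -> dict:
            # A federating cloud would publish its identity and capabilities
            # through some common directory or protocol a standard defines.
            return {"name": self.name, "capabilities": sorted(self.capabilities)}

        def federate(self, other: "Cloud") -> set:
            # Federation is only useful where capabilities overlap; a real
            # protocol would also negotiate naming, trust and governance.
            shared = self.capabilities & other.capabilities
            if not shared:
                raise ValueError(f"no common capabilities with {other.name}")
            self.peers[other.name] = shared
            other.peers[self.name] = shared
            return shared

    provider_a = Cloud("cloud-a", {"compute", "object-storage", "vm-migration"})
    provider_b = Cloud("cloud-b", {"compute", "block-storage", "vm-migration"})
    print(provider_a.federate(provider_b))  # e.g. {'compute', 'vm-migration'}

The point of a standard like P2302 is that the advertise and federate steps would work the same way between any two compliant providers, rather than through one-off bilateral integrations.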

“We’ve reached out to a lot of our members and companies and found that there’s a lot of confusion within our constituency about cloud computing, especially for those who are trying to advance the technology, either as service providers, researchers or governments. All stakeholders are having a difficult time sorting out the technologies and how they fit together in addition to just being able to identify the exact standards issues,” Bernstein explains.

Bernstein claims that while there are a number of organizations tackling specific issues in the broad cloud interoperability space, a few items have been overlooked or not given appropriate weight, most of them interoperability-related. While there are a number of organizations with interoperability at their core, he says his group seeks to fill the gaps in those efforts; a lack of measurements, for instance, is one weak area.

He compares the IEEE approach to interoperability to the way other standards have been pushed through. He says, “Think about it—when you get off a plane somewhere your phone just works. That’s because under the covers years ago we worked to solve exactly that problem—tackling the mobile infrastructure topology to create roaming capabilities. Even with the internet there’s this same thing with DNS and peering with autonomous system numbers and routing protocols.”

Bernstein continued to put cloud advancement in context, stating, “All of this took a long time but this is how innovation evolves… We’re at the same place with cloud today; there are walled gardens of great innovation—like then, it is still something of a closed system because that’s just how things develop.”
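Bernstein’s DNS analogy is easy to demonstrate. The short Python sketch below resolves names through the same standard machinery no matter where the query originates, which is exactly the kind of transparency the intercloud effort is after. The host names are examples only, and the snippet needs network access to run.

    import socket

    for host in ("www.ieee.org", "www.hpcwire.com"):
        try:
            # getaddrinfo performs a standard DNS lookup; the caller never
            # sees which resolvers, peering links or routes were involved.
            addrs = {info[4][0] for info in socket.getaddrinfo(host, 80)}
            print(f"{host} resolves to {sorted(addrs)}")
        except socket.gaierror as err:
            print(f"{host}: lookup failed ({err})")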
 
