IEEE Declares War on Cloud Computing Challenges

By Nicole Hemsoth

April 4, 2011

News about cloud standardization, emerging from vendors, trade associations and any number of other quarters, is in no short supply. For the most part, the progress has come from isolated pockets with specific goals. Groups that tackle smaller strands of cloud computing do tend to collaborate, but in the opinion of the IEEE, there is still a great deal of work to do to bring cloud computing into focus and open it to innovation.

The IEEE, the world’s largest professional association devoted to technological advancement, rallied its troops this morning with a new, broad cloud computing initiative. The effort focuses on lending some much-needed clarity to a complex topic and carries an extensive interoperability angle. The IEEE feels that its size, diversity and membership will drive progress toward a more robust cloud computing ecosystem, and it is certainly not thinking small.

Two new standards development projects are at the heart of the announcement. IEEE P2301, the “Draft Guide for Cloud Portability and Interoperability Profiles,” and IEEE P2302, the “Draft Standard for Intercloud Interoperability and Federation,” will both work to minimize a fragmented, siloed ecosystem, according to Steve Diamond, who serves as chair of the IEEE cloud computing initiative.

In advance of this announcement we talked to David Bernstein, IEEE P2301 and IEEE P2302 working group chair and managing director of Cloud Strategy Partners. He sees cloud computing as a game-changing shift, calling it “one of three aspects of the ‘perfect storm’ of technology waves currently sweeping across humanity; the other two being massive deployment of very smart mobile devices and ubiquitous high-speed connectivity.” In the eye of this storm, of course, is the cloud, which will serve as the heart of the other two movements.

Bernstein understands full well that the project is incredibly dense and multi-faceted in size and scope. As a former VP in Cisco’s CTO office running the company’s Cloud Lab, with previous executive positions at AT&T, Siebel Systems, Pluris and InterTrust, he also sees the challenge on the vendor side in tying all of the disparate aspects together. Furthermore, Bernstein has watched the IEEE’s processes at work historically through his involvement as a key contributor to OpenSOA, OASIS, SCA, WS-I, JCP/J2EE and IEEE POSIX.

He compares the gravity of the IEEE’s cloud computing goals to the processes behind the construction of the global long distance and mobile phone systems and the public internet. On that level, it’s not hard to see how important the organization feels clouds will be if it is willing to take on an effort of such gigantic scale.

A Standard to Procure Against

One of the first items on the IEEE cloud agenda is to clarify exactly what clouds are, how the ecosystem breaks down, and how to view and understand the principles behind decisions about adopting or creating the technology.

IEEE P2301 will provide a roadmap for vendors, service providers, governments and others to aid users “in procuring, developing, building and using standards-based cloud computing products and services, enabling better portability, increased commonality and greater interoperability across the industry.”

Bernstein describes P2301 as an umbrella initiative that will form a guide for portability and interoperability via profiles to aid in procurement processes. He noted that the U.S. government has some broad-based cloud computing goals, but governments need to be able to procure against standards, and as of now the various efforts are too fragmented to yield such guides.

He makes it clear that while this procurement-driven effort grows out of the need for a guide, the aim is not necessarily to help users choose among vendors, but rather to set forth specifics while leaving room for users to decide.

Groups like the Cloud Security Alliance, the Distributed Management Task Force (DMTF) and others have done some good work, but what they put out is not cohesive enough to wrap procurement policies around, since the same version control, voting processes and other approval and refinement practices are not in place. To put this in perspective, Bernstein says that despite the solid efforts of a group like the Cloud Security Alliance, a government cannot say “let’s use the xx standard to procure against,” which is a problem as the U.S. in particular moves forward on its Cloud First policy.
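To make the idea concrete, a profile of the sort P2301 envisions could in principle be expressed as a machine-readable checklist that procurement teams evaluate bids against. The sketch below is purely illustrative: P2301 was still a draft at the time of writing, so the categories shown (OVF packaging, CDMI storage, SAML federation) are existing standards chosen here as plausible examples, not fields taken from the document.

```python
# Purely illustrative sketch of "procuring against a standard." The
# profile categories below are hypothetical examples, not drawn from
# the IEEE P2301 draft itself.

# A procurement office expresses its requirements as a profile: for
# each category, the set of accepted standards-based options.
REQUIRED_PROFILE = {
    "vm_image_format": {"OVF"},        # portable virtual machine packaging
    "storage_api": {"CDMI"},           # standardized cloud storage interface
    "identity_federation": {"SAML2"},  # cross-provider authentication
}

def bid_conforms(vendor_capabilities):
    """A bid conforms if the vendor supports at least one accepted
    option in every category of the profile."""
    return all(
        accepted & set(vendor_capabilities.get(category, ()))
        for category, accepted in REQUIRED_PROFILE.items()
    )

# A vendor bid listing the standards each of its services supports.
bid = {
    "vm_image_format": ["OVF", "proprietary"],
    "storage_api": ["CDMI"],
    "identity_federation": ["SAML2", "OAuth"],
}

print(bid_conforms(bid))  # True: there is a standard to procure against
```

With a cohesive, versioned profile of this kind, a government could write it directly into a request for proposals, which is exactly what Bernstein argues today’s fragmented efforts do not yet allow.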

Bernstein says he remembers the old UNIX days, when there were likewise any number of groups with their own missions and profiles, and he sees similarities with where the cloud stands today. He claims we are at a very natural point in the evolution of such a process, with a great deal of splitting and divergence among groups, vendors and standardization efforts.

In his opinion, the IEEE is really the only organization that can bring together the many parties involved in cloud standardization, thanks in part to its global scope, its many publications and forums, and a volunteer culture that encourages member involvement and input. He says, “The Cloud Security Alliance is an ad-hoc association, the DMTF is a pay-to-play trade association and so is the Open Grid Forum. Inside we know the guys in all those organizations and there is some coordination,” but he claims that none of them is a top-tier international organization with the power to pull these disparate missions together.

An Eye on Interoperability

One of the “big picture” first projects the IEEE will tackle is rather dramatic in scope. The issues of federation, interoperability and portability are at the heart of hundreds of debates, papers and conferences, but it is a slow road to results, a point Bernstein agrees with. He feels that even though the path to portability is a long one, the roadmap the IEEE has followed with any number of other standards will apply here, hastened by widespread collaboration and information-sharing, some of which is enabled by the cloud itself.

IEEE P2302 will set forth the base “topology, protocols, functionality and governance required for reliable cloud-to-cloud interoperability and federation.” The working group behind it hopes to build an “economy of scale among cloud product and service providers that remains transparent to users and applications,” supporting the still-maturing cloud ecosystem while pushing interoperability in the same vein as earlier efforts like SS7/IN did for telephone systems years ago.
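The federation model is easiest to picture by analogy with the roaming and peering systems the group cites. What follows is a minimal, purely hypothetical sketch of the intercloud idea, assuming a directory-style “exchange” that brokers requests between clouds; the P2302 draft defined no such API at the time of writing, so every name below is an assumption for illustration only.

```python
# Hypothetical sketch of cloud-to-cloud federation in the spirit of
# IEEE P2302, by analogy with SS7 roaming in telephony or DNS/BGP
# peering on the internet. Not an API from the draft standard.

class IntercloudExchange:
    """Hypothetical broker where federated clouds register their
    capabilities and peers discover one another."""

    def __init__(self):
        self.registry = {}  # cloud name -> set of advertised capabilities

    def register(self, cloud_name, capabilities):
        self.registry[cloud_name] = set(capabilities)

    def find_provider(self, capability):
        # Return any federated cloud advertising the needed capability;
        # the requesting user never needs to know which one answers.
        for name, caps in self.registry.items():
            if capability in caps:
                return name
        return None

exchange = IntercloudExchange()
exchange.register("cloud-a.example", {"gpu-compute", "object-storage"})
exchange.register("cloud-b.example", {"block-storage"})

# A workload on cloud-b that needs GPUs is transparently brokered to
# cloud-a, much as a roaming phone call is handed to a foreign carrier.
print(exchange.find_provider("gpu-compute"))  # cloud-a.example
```

The point of the standard, as the working group frames it, is that this kind of brokering stays invisible to users and applications, whichever provider ultimately serves the request.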

“We’ve reached out to a lot of our members and companies and found that there’s a lot of confusion within our constituency about cloud computing, especially for those who are trying to advance the technology, either as service providers, researchers or governments. All stakeholders are having a difficult time sorting out the technologies and how they fit together in addition to just being able to identify the exact standards issues,” Bernstein said.

Bernstein claims that while a number of organizations are tackling specific issues in the broad cloud interoperability space, a few items have been overlooked or not given appropriate weight, many of them interoperability-related. Even though several organizations have interoperability at their core, he says his group seeks to fill the gaps in those efforts; a lack of measurements, for instance, is one weak area.

He compares the IEEE approach to interoperability to the way other standards have been pushed through. He says, “Think about it—when you get off a plane somewhere your phone just works. That’s because under the covers years ago we worked to solve exactly that problem—tackling the mobile infrastructure topology to create roaming capabilities. Even with the internet there’s this same thing with DNS and peering with autonomous system numbers and routing protocols.”

Bernstein continued to put cloud advancement in context, stating, “All of this took a long time but this is how innovation evolves… We’re at the same place with cloud today; there are walled gardens of great innovation—like then, it is still something of a closed system because that’s just how things develop.”
 
