Intel Lays Groundwork to Fulfill 2015 Cloud Vision

By Nicole Hemsoth

October 29, 2010

According to IDC forecasts, “by 2015, over 2.5 billion people with more than 10 billion devices will access the Internet,” which means that capacity will be stretched to over twice what it is now. Already, datacenters are experiencing the effects of increased demand, and operators of existing facilities, under cost and efficiency pressures, must quickly learn how to become far more efficient while still offering peak performance.

What is needed is an overhaul of current theories about efficient datacenter operation so that flexibility and cloud architectures are given sufficient weight. These are all issues that Intel addressed recently via a string of announcements that were geared toward creating a more open, accessible, flexible and efficient cloud.

This week Intel announced its Cloud 2015 Vision, which sets forth its mission to create a “federated, automated and client-aware” environment that adheres to its three pillars of cloud (efficiency, simplification and security) as well as its goals to “create solutions that are open, multi-vendor and interoperable.” By packaging a string of rhetoric-driven announcements around a hard-to-disagree-with set of topics that challenge cloud adoption, Intel took some steps toward making itself heard in the “cloudosphere” on some of the major issues that vendors in niche cloud spaces have long discussed at length.

Key Challenges for the Next Five Years

Intel’s goals over the next five years are based on some inherent challenges that are holding the paradigm shift of cloud at bay. These include:

• Maintaining the stability of mission-critical applications during the cloud migration process.

• Finding ways to negotiate issues related to privacy, security and the protection of intellectual property.

• Improving the automation and flexibility of resources as cloud tools continue to evolve.

• Finding solutions that meet interoperability goals while maintaining flexibility.

• Making sure that cloud-based applications enable user productivity, no matter what device is being used.

In order to address these challenges, the company has named three pillars in its strategy for the years to come. These elements are defined by the words “federated, automated and client-aware.”

The Federation and the Fleet

In Intel’s view, the concept of a federated cloud refers to the somewhat vague notion that “communications, data and services can move easily across cloud infrastructures.” In non-marketing speak, that means interoperability is the prime directive for the federation, since datacenters have had difficulty moving data and services across their own borders.

Intel is calling for “a level of federation that enables the movement of workloads and data from one service provider to another; burst implementations between internal private cloud and public cloud providers if additional capacity is needed; and secure and reliable data flow across vendors, partners and clients.” It sounds like a tall order, but if Intel is backing it and has five years to do something about it, we can hold out hope that these federation goals will go beyond rhetoric.

Today Intel, along with 70 other vendors, announced the creation of a coalition, called the Open Data Center Alliance, to form a system of open standards for the cloud. This fits in with the 2015 vision and, according to reports, the alliance will represent over $50 billion in annual IT investment. Since Intel’s products drive the vast majority of the servers operating in the cloud today, the company will not be a voting member, but will instead serve as a technical advisor.

According to Intel’s representative for the Open Data Center Alliance, Billy Cox, the coalition “is a way to create and unify the voice of cloud consumers and cloud users, using usage models as a way to specify requirements. We’ve never seen this approach before.”

Automatic for the People

Automation is another of the three pillars that Intel sees as upholding its Cloud Vision for 2015; in an automated cloud, provisioning is no longer a crisis situation and is instead handled automatically. Ever since IDC released its 2009 Data Center Survey report suggesting that virtualization thus far has not reduced complexity, and that in fact “the number of server instances that can be managed by the average system administrator has increased from 27 to 41, comparing non-virtualized servers to virtualized servers,” we can see how Intel might view this as an issue worth tackling.

Without effective datacenter automation, the benefits of cloud, particularly from a cost standpoint, are diminished; furthermore, adding this layer of complexity to an IT organization doesn’t make the cloud a very attractive option. Intel sees it as critically important to address automation of provisioning, resource monitoring, consumption reporting for chargeback, and workload balancing. Again, a tall order, but one that is being worked on at various other cloud management-focused companies.

Client Awareness and the Lowest Common Denominator

One of the greatest challenges on the horizon for the cloud ecosystem will be the vast number and array of devices. As Intel states, “today there are certain frameworks that allow for some level of datacenter intelligence and scaling to support the client being served; but they are neither consistently applied nor ubiquitous. Many of today’s Internet services default to the lowest common denominator even if the user is accessing the service with a more capable device such as a PC.”

As the amount of data being generated continues to increase and the range of devices continues to expand, Intel suggests that the only solution is for datacenter and service providers to enable secure access and an optimized experience regardless of device, allowing “the cloud to sense and dynamically adjust to take advantage of attributes and capabilities of the client device,” including everything from battery life and connectivity to policies.

How many times have I used the phrase “tall order” and would it violate the rules of writing or be redundant if I said it again? Do I really need to at this point?

Moving Beyond Rhetoric

There are many keywords in Intel’s mission statement for the cloud vision it intends to realize by 2015, and while these are lofty goals (creating an interoperable and open cloud that focuses on efficiency and security), they are the same words echoed by any number of other cloud vendors in the space right now. However, coming close to creating interoperable solutions that provide an easy framework for users is much more complex than it sounds, and it will certainly be 2015 before major progress on the interoperability front (and not just from Intel) is made.

Intel thinks of cloud computing as less of a revolution and more as a paradigm shift in IT delivery. As the company noted in its explanation of its vision, the cloud “offers the potential for a transformation in the design, development and deployment of next-generation technologies,” which will “enable flexible, pay-as-you-go business models that will alter the future of computing from mobile platforms and devices to the datacenter.”

Interestingly, during this exact same week, Microsoft launched a full-blown effort to address many of these same issues, particularly as they relate to cross-device efforts to improve IT delivery. Through its “client-plus-cloud” initiative, the company is also seeking to address the many platforms and devices through which clients access and use resources, be those HPC or vanilla machines. Lately, in fact, there has been increasing momentum around the issues presented by mobile applications and their role, not only for mainstream use, but for HPC as well.

Many researchers are finding value in mobile access to their scientific applications and with the cloud, their data can be uploaded instantly to a remote source. This could mean new breakthroughs in research but the cloud and mobile technologies need to be able to work together seamlessly — a fact that both Intel and Microsoft (as well as the majority of other major vendors in the cloud space) are recognizing and addressing.

The Cloud Builders

In addition to its role in the Open Data Center Alliance, Intel also has pledged its commitment to its Cloud Builders program, which allows a number of vendor partners, including IBM, Microsoft and VMware, among others, to provide the solutions that are required according to the needs expressed by the alliance.

As Intel describes it, Cloud Builders “provides the industry a central point for cloud innovation based on the IT requirements defined by the Open Data Center Alliance and other IT end users.” The program also aims to publish “detailed reference architectures, success stories, and best practices that customers can use now to deploy and enhance the cloud.”

Intel is taking steps toward creating a healthier cloud ecosystem, but a true revolution, especially on the interoperability front, is going to take a great deal more than detailed conversations. While it’s too early to speculate on how the challenges preventing seamless interoperability standards will actually pan out, watching how the rhetoric spills over into the real world will be interesting.