From multicloud to one cloud for every workload

December 4, 2017

First came the idea of the cloud, followed by a tangle of cloud services piling up in every business around the globe. To manage this eclectic collection of business-user favorites, IT began to think of the piecemeal approach as the multicloud and set about finding ways to cobble it all together and make it work. But the reality is that business users still find themselves searching for an elusive mix of capabilities defined by individual workloads, along with the services that make those capabilities more accessible and the work more manageable. The multicloud approach doesn’t solve those problems, so data silos persist and the economies of scale promised by cloud computing are rarely fully realized. What is needed instead is one cloud that can and does accommodate every workload. Now, such a cloud finally exists.

The one cloud strategy has two main components: hardware and software. Microsoft has addressed both with a firm commitment to expanding capabilities, improving efficiency, and making all of it accessible to more users. Expanding capabilities meant bringing to the cloud services that previously could not be moved off premises, including resource-intensive workloads such as artificial intelligence and supercomputing. Now those, too, are available in the cloud as a service.

On the hardware side…

Microsoft partnered with Cray, NVIDIA, and Intel to complete the cloud infrastructure needed to handle any workload, no matter how complex.

The partnership with Cray brought supercomputing to Microsoft Azure datacenters for workloads in high-performance computing (HPC), AI, and advanced analytics at scale. This exclusive arrangement means organizations no longer need to compromise by choosing between the cloud for large data repositories and on-premises systems for dedicated, tightly coupled architectures. Now they can have both in the cloud, with no compromise in capabilities, resources, or performance.

New GPU offerings through the partnership with NVIDIA make it possible to train machine learning models faster and more economically, entirely in the cloud. The NCv2 and ND series will be generally available by the end of 2017, furthering those advances in cloud computing for AI, machine learning, and deep learning. The Azure NC-series enables users to run CUDA workloads on up to four Tesla K80 GPUs in a single virtual machine, and its RDMA and InfiniBand connectivity delivers extremely low latency, so users can scale up or scale out any workload.
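As a rough illustration (not from the article; the model and data below are placeholder assumptions), here is a minimal PyTorch sketch of the single-VM scale-up case: wrapping a model in DataParallel so each training batch is split across all visible GPUs, such as the four K80s in a hypothetical NC24 instance.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder toy model; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

# On a multi-GPU VM, DataParallel splits each batch across the GPUs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on random stand-in data.
inputs = torch.randn(256, 1024, device=device)
labels = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

On a four-GPU VM, torch.cuda.device_count() would report four devices, so each 256-sample batch is processed in four 64-sample slices in parallel.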

Linux and Windows virtual machines can be created in seconds for a wide range of computing scenarios, using your choice of language, workload, and operating system.
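For concreteness, here is a sketch of creating such a VM programmatically with a current version of the Azure Python management SDK (newer than the tooling available when this article was written); the subscription, resource group, and network interface names are hypothetical, and the resource group and NIC are assumed to already exist.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical identifiers; replace with real values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "hpc-rg"
NIC_ID = ("/subscriptions/<subscription-id>/resourceGroups/hpc-rg"
          "/providers/Microsoft.Network/networkInterfaces/hpc-nic")

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Request a GPU-capable Linux VM; poller.result() blocks until provisioning ends.
poller = compute.virtual_machines.begin_create_or_update(
    RESOURCE_GROUP,
    "hpc-vm01",
    {
        "location": "eastus",
        "hardware_profile": {"vm_size": "Standard_NC6"},
        "storage_profile": {
            "image_reference": {
                "publisher": "Canonical",
                "offer": "UbuntuServer",
                "sku": "16.04-LTS",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "hpc-vm01",
            "admin_username": "azureuser",
            "admin_password": "<strong-password>",
        },
        "network_profile": {"network_interfaces": [{"id": NIC_ID}]},
    },
)
vm = poller.result()
print(vm.name, vm.provisioning_state)
```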

On the software side…

Microsoft makes it easy to run HPC in the cloud by offering an array of direct-access services for end users, including Azure Batch, Batch AI, Batch Rendering, and CycleCloud.

Azure Batch enables batch computing in the cloud and handles resource provisioning, freeing users to focus on their workloads rather than their infrastructure. No capital investment is needed to gain access to a tremendous amount of scalable computing power.
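A minimal sketch of that workflow with the azure-batch Python SDK (the account name, key, endpoint, and sizes below are hypothetical): define a pool, a job, and a set of independent tasks, and the service provisions the nodes and schedules the work.

```python
from azure.batch import BatchServiceClient
import azure.batch.batch_auth as batch_auth
import azure.batch.models as batchmodels

# Hypothetical account credentials and endpoint.
creds = batch_auth.SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(
    creds, batch_url="https://mybatchaccount.eastus.batch.azure.com")

# Pool: Batch provisions the requested Ubuntu nodes on our behalf.
client.pool.add(batchmodels.PoolAddParameter(
    id="hpc-pool",
    vm_size="STANDARD_D2_V2",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="Canonical", offer="UbuntuServer",
            sku="16.04-LTS", version="latest"),
        node_agent_sku_id="batch.node.ubuntu 16.04"),
    target_dedicated_nodes=4))

# Job bound to the pool, plus a collection of independent tasks.
client.job.add(batchmodels.JobAddParameter(
    id="sim-job",
    pool_info=batchmodels.PoolInformation(pool_id="hpc-pool")))

tasks = [batchmodels.TaskAddParameter(
             id=f"task-{i}",
             command_line=f"/bin/bash -c 'echo simulating shard {i}'")
         for i in range(10)]
client.task.add_collection("sim-job", tasks)
```

Note that the pool request is declarative: the user states the VM size and node count, and the service handles provisioning and scheduling, which is the point of the paragraph above.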

Batch AI is a new cloud service that handles deep learning training and testing in parallel at scale, freeing researchers to focus solely on model development. Batch Rendering is Azure’s rendering platform, which also offers pay-per-use licensing for third-party rendering software. Both are domain-specific layers on top of Batch that further simplify access to compute resources.

CycleCloud delivers cloud HPC and Big Compute environments to end users and automates their configuration.

A strong ecosystem to complete the ‘one cloud for every workload’ build…

As any IT staff member knows, a strong ecosystem is essential for bridging and integrating applications, stabilizing and maturing a new technology or configuration, and generally adding to its capabilities and features. To that end, Microsoft has built a broad partner ecosystem around its one cloud concept that includes such heavy hitters as Rescale, PBScloud, and Teradici.

Rescale on Azure is an HPC simulation platform fully integrated with all Azure datacenters and with more than 200 simulation software applications.

PBScloud is also an HPC cloud management service, with a wide range of tools to control and manage security, governance, costs, and poly-cloud environments.

Teradici provides the means for businesses to easily move Windows or Linux client applications to the public cloud.

Other partners will join the ecosystem over time as new applications arise, such as those surrounding the still-maturing AI industry, and as the need to accommodate and manage them grows.

The first ‘one cloud’ on the market

Combined, the hardware, software, and ecosystem deliver the first-ever ‘one cloud for every workload’ concept. It is the result of an innovative and aggressive effort to overcome previous cloud limitations and reach performance levels high enough to meet the requirements of large, complex workloads. Until recently that wasn’t possible, which is why bleeding-edge HPC workloads remained grounded in on-premises datacenters and have been among the last to move to the cloud.

Some companies are using the Microsoft cloud to shift expenses from capex to opex and otherwise capitalize on the economies of scale that cloud environments uniquely offer. End users prefer its ease of access and its one-stop, silo-busting capabilities over the typical sprawl of cloud services.

Others prefer to test and benchmark new HPC and AI hardware and software before committing capex dollars and the associated labor costs. Still others are looking to this new one cloud idea to help them tame the coming avalanche of IoT data and the advanced analytics and storage needs that come with it.

In any case, this is the first cloud concept of its kind to hit the market. An array of exclusive partnerships, such as the one with Cray for supercomputing, makes it unlikely that a comparable competitor will arise any time soon. Obsolescence is also unlikely to be an issue, given the innate nature of cloud computing (constant updates and upgrades) and the unique configuration of this particular cloud concept.
