Microsoft Spins Cycle Computing into Core Azure Product

By John Russell

December 5, 2017

Last August, cloud giant Microsoft acquired HPC cloud orchestration pioneer Cycle Computing. Since then, the focus has been on integrating Cycle’s organization, mapping out its new role as a core Microsoft Azure product, and deciding what to do with those Cycle customers who currently use non-Azure cloud providers. At SC17, HPCwire caught up with Brett Tanzer, head of the Microsoft Azure Specialized Compute Group (ASCG, formerly Big Compute), in which Cycle now lives, and Tim Carroll, formerly Cycle’s VP of sales and ecosystem development and now a ‘principal’ in ASCG, for a snapshot of emerging plans for Cycle.

Much has already been accomplished, they emphasize – for starters, “the Cycle organization has settled in” and most of its staff are relocating to Seattle. Much also remains to be done – it will probably be a year or so before Cycle is deeply integrated across Azure’s extensive capabilities. In some ways, it’s best not to think of the Cycle acquisition in isolation but as part of Microsoft’s aggressively evolving strategy to make Azure all things for all users, a strategy that includes the HPC community writ large. Cycle is just one of the latest, and more significant, pieces of the puzzle.

Founded in 2005 by Jason Stowe, Rachel Christensen, Rob Futrick and Doug Clayton, Cycle Computing was one of the first companies to target HPC orchestration in the cloud; its software, CycleCloud, enables users to burst HPC workloads (and data) into the cloud and manage them there. Until now, cloud-provider agnosticism has been a key Cycle value proposition. That will change, but how quickly is uncertain. Tanzer assures there will be no disruption of existing Cycle customers, but also emphasizes that Microsoft intends Cycle to become an Azure-only product over time. Cycle CEO Jason Stowe has taken on a new role as Principal Group Program Manager responsible for Hybrid and Cluster Workflow products in the Specialized Compute Group. The financial details of the Cycle acquisition weren’t made public.
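To make “burst” concrete: the core pattern an orchestrator like CycleCloud automates is a control loop that watches a scheduler’s queue and grows or shrinks a pool of cloud nodes to match demand. The Python sketch below is a hypothetical illustration of that loop, not Cycle’s actual API; every method on the scheduler and cloud objects is a placeholder.

```python
# Hypothetical sketch of the cloud-bursting loop an orchestrator such as
# CycleCloud automates. All object and method names are illustrative
# placeholders, not Cycle's actual API.
import time

CORES_PER_NODE = 16              # cores supplied by each cloud instance
MAX_CLOUD_NODES = 100            # spending/quota guardrail
IDLE_TICKS_BEFORE_RELEASE = 10   # sustained idleness before scale-down

def burst_loop(scheduler, cloud):
    """Watch the on-premises scheduler's queue; provision cloud nodes when
    jobs are waiting and release them when they sit idle."""
    idle_ticks = 0
    while True:
        pending_cores = scheduler.pending_core_demand()  # cores requested by queued jobs
        busy = cloud.busy_node_count()
        total = cloud.node_count()

        # Scale up: queued demand exceeds what the free nodes can supply.
        shortfall = pending_cores - (total - busy) * CORES_PER_NODE
        if shortfall > 0:
            wanted = (shortfall + CORES_PER_NODE - 1) // CORES_PER_NODE
            cloud.provision_nodes(min(wanted, MAX_CLOUD_NODES - total))

        # Scale down: release nodes only after sustained idleness so short
        # gaps between jobs don't thrash the pool.
        if pending_cores == 0 and busy < total:
            idle_ticks += 1
            if idle_ticks >= IDLE_TICKS_BEFORE_RELEASE:
                cloud.release_idle_nodes()
                idle_ticks = 0
        else:
            idle_ticks = 0

        time.sleep(60)
```

The value sits in the guardrails: node caps, idle timeouts, and per-workload policies are what separate orchestrated bursting from simply renting VMs.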

Far more than in the past, HPC is seen as an important opportunity by the big cloud providers. The eruption of demand for running AI and deep learning workflows has been a major driving force as well.

[Image: Nvidia V100 GPU]

Microsoft, like Google and Amazon (and others), has been investing heavily in advanced-scale technology. The immediate goal is to attract HPC and AI/deep learning customers. One indicator is the way they have all been loading up on GPUs. Azure is no exception and offers a growing list of GPU instances (M60, K80, P100, P40, and the just-announced V100); it also offers InfiniBand high-speed interconnect. In October, Microsoft extended its high-performance gambit further via a partnership with Cray to offer supercomputing in the cloud (see HPCwire article, Cray+Azure: Can Cloud Propel Supercomputing?).

How the latter bet will play out is unclear – Tanzer says, “We are hearing from customers there are some workloads they need to get into the cloud that require a Cray. And Cray itself is a pretty innovative company. We think the partnership has longer legs. Look for more to come.” One wonders what interesting offerings may sprout from that alliance.

For now, the plan for Cycle is ever-deeper integration with Azure’s many offerings, perhaps eventually including Cray. It’s still early days, of course. Tanzer says, “If Tim looks like he hasn’t slept much for the past three months, it’s because he hasn’t. Strategically, all of these products – Cycle, Azure Batch, HPC Pack (a cluster management tool) – will work together and drive orchestration across all the key workloads.”

“The company is rallying behind the [HPC] category and customers are responding very well,” says Tanzer. “We are investing in all phases of the maturity curve, so if you are somebody who wants a Cray, we now have an answer for you. If you are rewriting your petrochemical workload and want to make it cloud friendly, then Batch is a great solution. We are really just taking care, wherever we can, to take friction out of using the cloud. We looked at Cycle and its fantastic people and knowledge. The relationship with Cycle is very symbiotic. We look at where our customers are and see [that for many], Cycle helps them bootstrap the process.”
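Tanzer’s Batch example is worth making concrete. In the Batch model, a user defines a pool of VMs, a job bound to that pool, and a set of tasks; the service handles scheduling and node management. Below is a minimal sketch in the shape of the era’s Azure Batch Python SDK; the account name, key, image, and solver command are placeholders, and some keyword argument names have shifted across SDK versions, so verify against your release.

```python
# Minimal Azure Batch flow: create a pool of VMs, a job bound to the pool,
# and tasks within the job. Account details and the solver command are
# placeholders; some keyword names (e.g. base_url, target_dedicated_nodes)
# have changed across azure-batch SDK versions.
import azure.batch.batch_service_client as batch
import azure.batch.batch_auth as batch_auth
import azure.batch.models as batchmodels

credentials = batch_auth.SharedKeyCredentials('mybatchaccount', '<account-key>')
client = batch.BatchServiceClient(
    credentials, base_url='https://mybatchaccount.westus.batch.azure.com')

# Pool: a managed set of identical VMs.
client.pool.add(batchmodels.PoolAddParameter(
    id='sim-pool',
    vm_size='STANDARD_D2_V2',
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher='Canonical', offer='UbuntuServer',
            sku='16.04-LTS', version='latest'),
        node_agent_sku_id='batch.node.ubuntu 16.04'),
    target_dedicated_nodes=4))

# Job: a container for tasks, scheduled onto the pool.
client.job.add(batchmodels.JobAddParameter(
    id='sim-job',
    pool_info=batchmodels.PoolInformation(pool_id='sim-pool')))

# Tasks: the individual command lines Batch runs on pool nodes.
for i in range(16):
    client.task.add(job_id='sim-job', task=batchmodels.TaskAddParameter(
        id='case-{:02d}'.format(i),
        command_line='/bin/bash -c "./solver case{:02d}.inp"'.format(i)))
```

For a rewritten, embarrassingly parallel workload of the petrochemical sort Tanzer mentions, each input case becomes a task and the pool size becomes a dial.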

It’s not hard to see why Cycle was an attractive target. Cycle brings extensive understanding of HPC workloads, key customer and ISV relationships, and a robust product. Recently it has been working to build closer relationships with systems builders (e.g. Dell EMC) and HPC ISVs (e.g. ANSYS). From an operations and support perspective, not much has changed for Cycle customers, says Carroll, although he emphasizes that Cycle has now gained access to Microsoft’s deep bench of resources. No decision has been made on name changes, and Tanzer says, “Cycle is actually a pretty good name.”

Cycle’s new home, Azure’s Specialized Compute Group, appears to be a new organization encompassing what was previously Big Compute. As of this writing, there was still no Specialized Compute Group web page, and from the tone of Tanzer and Carroll it seemed that things could still be in flux. SCG has a fairly broad mission: to smooth the path to cloud computing across all segments with so-called “specialized needs” – that, of course, includes HPC but crosses over into enterprise computing as well. To a significant extent, says Tanzer, it is part of Microsoft’s company-wide mantra of meeting customers where they are to minimize disruption.

“Quite frankly we are finding customers, even in the HPC space, need a lot of help and it’s also an area where Microsoft has many differentiated offerings,” Tanzer says. “You should expect us to integrate Cycle’s capabilities more natively into Azure. There is much more that can be done in the category to help customers take advantage of the cloud, from providing best practices about how your workloads move, through governance, and more. Cloud consumption is getting more sophisticated and it’s going to require tools to help users maximize their efforts even though the usage models will be very different.”

One can imagine many expanded uses for Cycle functionality, not least close integration with HPC applications and closer collaboration with ISVs to drive adoption. Microsoft has the clout and the understanding of both the software and infrastructure businesses to help drive that, says Carroll. “Those two things are important because this is a space that’s always struggled to figure out how to build partnerships between the infrastructure providers and software providers; Microsoft’s ability to talk to some of the significant and important ISVs and figure out ways to work with them from a Microsoft perspective is a huge benefit.”

It probably bears repeating that Tanzer’s expectations seem much broader than HPC or Cycle’s role as an enabler. He says, rather matter-of-factly, “Customers are recognizing the cloud is the destination and thinking in more detail about that. It will be interesting to see how that plays out.” When he says customers, one gets the sense he is talking about more than just a narrow slice of the pie.

The conversation over how best to migrate HPC workloads to the cloud and run them there has a long history. Today, there is less debate about whether it can be done effectively and more about how to do it right, how much it costs, and what types of HPC jobs are best suited to the cloud. Carroll has for some time argued that technology is not the issue for most potential HPC cloud users.

[Image: Tim Carroll]

“It’s less about whether somebody is technically ready than whether they have a business model that requires them to be able to move faster and leverage more compute than they had thought they were going to need,” says Carroll. “Where we see the most growth is [among users] who have deadlines and at the end of the day what they really care about is how long will it take me to get my answer and tell me the cost and complexity to get there. That’s a different conversation than we have had in this particular segment over time.”

Some customer retraining and attitude change will be necessary, says Tanzer.

“They are going to have hybrid environments for a while, so to the degree we can help them reduce some of the chaos that comes from that and help retrain the workforce easily on what it needs to take advantage of the cloud, we think that’s important. The workforces who run the workloads really understand all they want to do is take advantage of the technology, but some relearning is necessary, and that’s another area where Cycle really helps because of its tools and set of APIs – they speak the language of a developer,” he says.
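What “speaking the language of a developer” buys is a cluster lifecycle you can script. The sketch below is hypothetical (the orchestrator object and its methods are placeholders, not Cycle’s actual SDK), but it captures the idiom Tanzer and Carroll describe: create a cluster for a burst of work, then let it evaporate.

```python
# Hypothetical illustration of a developer-facing cluster API of the kind
# Tanzer describes. Class and method names are placeholders, not Cycle's
# actual SDK.
from contextlib import contextmanager

@contextmanager
def ephemeral_cluster(orchestrator, template, nodes):
    """Create a cluster for the lifetime of a 'with' block, then tear it
    down: the cloud lifecycle that replaces a standing, always-on cluster."""
    cluster = orchestrator.create_cluster(template=template, initial_nodes=nodes)
    try:
        cluster.wait_until_ready()
        yield cluster
    finally:
        cluster.terminate()  # stop billing the moment the work is done

# Usage: run a parameter sweep, then let the cluster evaporate.
# with ephemeral_cluster(orch, template='gridengine-centos7', nodes=32) as c:
#     job = c.submit(command='./solver --case case42.inp')
#     job.wait()
```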

Cycle’s connections in the academic space will also be beneficial, according to Tanzer. There are both structural and financial obstacles for academics who wish to run HPC workloads in the commercial cloud, and Cycle’s insight will help Azure navigate that landscape to the benefit of Azure and academic users alike, he suggests. The Cray deal will help in government markets, he says.

Stay tuned.
