ANSYS Adds Cycle Orchestration for Enterprise Cloud HPC

By Doug Black

February 4, 2017

The waiting is the hardest part.

When design engineers need to run complex simulations, too often they find that the HPC resources required for those workloads are already being used. The problem: most on-prem data centers are provisioned for steady-state, not high-demand, needs. When demand increases and HPC resources aren’t available, the engineer puts in a request with the job scheduler. Here, to paraphrase the opening of “Casablanca,” the fortunate ones, through money, or influence, or luck, might obtain access to HPC resources. But the others wait in scheduling limbo. And wait, and wait, and wait….

Now ANSYS, the popular CAE software vendor whose users increasingly turn to HPC for complex simulation workloads, has partnered with Cycle Computing and its CycleCloud software to leverage dynamic cloud capacity and auto-scaling. CycleCloud will provide HPC orchestration for ANSYS’s Enterprise Cloud HPC offering, an engineering simulation platform delivered on Amazon Web Services. CycleCloud enables cloud migration of CAE workloads that require HPC, including storage and data management, plus on-demand, scalable access to resources for interactive and batch execution.

According to ANSYS, more customers are turning to the cloud as the locale for the full simulation and design life cycle.

“We have periods when we have need for many more cores than our data centers can manage,” Judd Kaiser, ANSYS cloud computing program manager, said. “Or we’re moving to increasingly variable workloads, and we’re looking to cloud now as a possible solution. On the other end, we have customers who are growing into HPC, who’d like to take advantage of HPC, but building a data center isn’t their core business, so they want to know how they can use cloud to their advantage.”

Cycle addresses both needs, he said.

“We didn’t have much experience in provisioning cloud resources and managing HPC on cloud infrastructures, and that’s what Cycle brought to the table,” he said. “ANSYS Enterprise Cloud is intended to be a virtual simulation data center; it just happens to be backed by public cloud hardware. It means we can provision for a customer and they can have it up and running next week, serving the needs of dozens of engineers running very large workloads. If that same customer asked us for a recommendation of what we need for a data center, from specs for the system, to ordering the hardware, to rack and stack and installing software and rolling it out to the engineers, that typically takes many months.”

CycleCloud is intended to ensure optimal AWS Spot instance usage and that the right resources are used for the right amount of time in the ANSYS Enterprise Cloud. With CycleCloud handling auto-scaling, he explained, “the engineer submits a job to the cluster…and the cluster scales up to meet the demands of the job. So resources are provisioned specifically to serve the needs of that individual job, the job runs almost immediately, and then when it’s complete those resources are decommissioned.”
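That provision-run-decommission lifecycle is simple to picture in code. The sketch below is purely illustrative: the ElasticCluster class, its method names, and the 16-cores-per-node figure are hypothetical stand-ins rather than CycleCloud’s actual API, but they capture the pattern Kaiser describes.

```python
# Illustrative sketch only: class and method names here are hypothetical,
# not CycleCloud's real API. It models the lifecycle described above: a job
# arrives, just enough nodes are provisioned for it, and they are torn down
# as soon as the job completes.

import time

class ElasticCluster:
    def __init__(self):
        self.nodes = 0

    def scale_to(self, cores_needed, cores_per_node=16):
        # Provision only as many nodes as this job's core count requires.
        self.nodes = -(-cores_needed // cores_per_node)  # ceiling division
        print(f"Provisioned {self.nodes} nodes for {cores_needed} cores")

    def decommission(self):
        # Release all cloud resources so billing stops with the job.
        print(f"Decommissioning {self.nodes} nodes")
        self.nodes = 0

def run_job(cluster, cores_needed, runtime_s):
    cluster.scale_to(cores_needed)   # cluster grows to meet the job
    time.sleep(runtime_s)            # the simulation runs
    cluster.decommission()           # resources vanish when it finishes

run_job(ElasticCluster(), cores_needed=512, runtime_s=1)
```

The design point is that billing tracks the job: compute nodes exist only between submission and completion, rather than idling at steady-state capacity.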

Kaiser said there is already some misunderstanding that the combined ANSYS-Cycle offering targets only burst-to-the-cloud demand situations.

“It’s more than that,” he said. “People imagine burst capabilities… it sounds great. They think: ‘I have an on-prem job, I’ll submit it to the cloud, and when it’s done I’ll bring it back.’ But therein lies the problem: bringing it back.”

Not only do ANSYS jobs use a significant amount of compute resources, he said, but once that job is complete the resulting data set can be extremely large. “So if the idea is to bring that data set back on prem and finish the simulation process there…, for most of our software that’s done interactively. You get the data, you load it onto a graphical workstation, you slice and dice it…and extract the useful information. That last part is graphical in nature. So if your vision is to launch to the cloud for the HPC and then bring the results back, you’ve got a data transfer problem. Our results files are routinely huge.”
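A rough back-of-the-envelope shows why. The article quotes no file sizes, so the 500 GB results file and 1 Gbps sustained link below are assumptions, but the arithmetic makes the point:

```python
# Back-of-the-envelope transfer time for a hypothetical results file.
# The article gives no sizes; 500 GB and a 1 Gbps sustained WAN link
# are assumptions chosen for illustration.

file_gb = 500                         # assumed results-file size, gigabytes
link_gbps = 1.0                       # assumed sustained link speed, gigabits/s

seconds = file_gb * 8 / link_gbps     # 8 bits per byte
print(f"{seconds / 3600:.1f} hours")  # ~1.1 hours of pure transfer, per job
```

At those assumed numbers, every job costs more than an hour of raw download time before any post-processing can begin, and the cost recurs for every simulation an engineer runs.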

The answer, he said, is to conduct the entire simulation process in the cloud. “Without moving the data after it’s computed, you spin up a graphical workstation in the cloud and do your post-processing with the data in place, still in the cloud. You’re using some sort of thin client locally to interact with the software, but it’s all physically running in the cloud.”
