Utility Supercomputing Heats Up

By Tiffany Trader

February 28, 2013

The HPC in the cloud space continues to evolve, and one of the companies leading that charge is Cycle Computing. The utility supercomputing vendor recently reported a record-breaking 2012, punctuated by several impressive big science endeavors. One of Cycle’s most significant projects was the creation of a 50,000-core utility supercomputer inside the Amazon Elastic Compute Cloud.

Built for pharmaceutical companies Schrödinger and Nimbus Discovery, the virtual mega-cluster was able to analyze 21 million drug compounds in just 3 hours for less than $4,900 per hour. The accomplishment caught the attention of IDC analysts Chirag Dekate and Steve Conway, who elected to honor Cycle with their firm’s HPC Innovation Excellence Award.
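
Taken at face value, the headline numbers reduce to simple arithmetic. The sketch below is purely illustrative, using only the round figures quoted above (50,000 cores, 21 million compounds, 3 hours, under $4,900 per hour); it is not based on Cycle’s or Amazon’s actual billing data.

```python
# Back-of-the-envelope arithmetic for the 50,000-core run, using only the
# round figures quoted in the article (illustrative, not actual billing data).
cores = 50_000            # virtual cores in the EC2 cluster
compounds = 21_000_000    # drug compounds analyzed
hours = 3                 # wall-clock duration of the run
cost_per_hour = 4_900     # quoted upper bound on hourly cost, in USD

core_hours = cores * hours                   # 150,000 core-hours consumed
throughput = compounds / core_hours          # ~140 compounds per core-hour
total_cost = hours * cost_per_hour           # under $14,700 for the whole run
per_compound = total_cost / compounds        # well under a tenth of a cent each

print(f"{core_hours:,} core-hours, ~{throughput:.0f} compounds/core-hour")
print(f"<= ${total_cost:,} total, <= ${per_compound:.4f} per compound")
```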

Chirag Dekate, research manager for high-performance systems at IDC, explained that the award recognizes those who have best applied HPC in the ecosystem to solve critical problems. More specifically, IDC looks for scientific achievement, ROI, or a combination of the two.

HPCwire spoke with Cycle CEO Jason Stowe shortly after the award was announced about the growth in HPC cloud and his company. Stowe really sees 2012 as the turning point – both for the space and for Cycle Computing. “We’ve basically hit the hockey stick growth period where there’s more rapid adoption of the technology,” he says. “Relative to utility supercomputing and HPC cloud in general we are definitely seeing a lot of interest in the space.”

During the Amazon Web Services re:Invent show in November, several big-name customers, including Novartis, Johnson & Johnson, Life Technologies, Hartford Insurance Group and Pacific Life Insurance, came forward to discuss their use of Cycle’s cluster-building software. The companies highlighted many of their biggest use cases and described how HPC cloud helps move the needle for Fortune 500 firms.

“Utility supercomputing applies to a large variety of companies regardless of their industry,” says Stowe, “because it supports business analytics, it supports various forms of engineering simulations and helps get the science done.”

Cycle’s customer base is well-represented across disciplines. “The majority of the top 20 big pharma companies use our software; three of the five largest variable annuity businesses use our software internally and externally or in combination,” says the CEO. The vendor also counts several leading life science companies among its customers, including Schrödinger, which, in addition to its initial 50,000-core run, continues to use the Cycle-EC2 cluster for ongoing workloads. Manufacturing and energy companies are also plugging into the Cycle cloud.

There are still technical and cultural barriers to cloud adoption, however. Stowe concedes the point, but, only half-jokingly, he adds that Cycle has solved most of the technical challenges. At this juncture, he believes the lag is more on the cultural side, but there are signs of progress.

“We have these traditional companies like Johnson & Johnson and Hartford Life transitioning to a cloud model. That’s a huge cultural indicator, and definitely a sea change from four to five years ago,” he says.

The Business Model

What about the long-term profit potential for a business that relies on data-parallel workloads? The question is met with a three-part answer. First, Stowe says that Cycle has always been profitable: as a bootstrapped company with no investors, it has built its business on a real cash-flow stream. Second, he insists that the vast majority of growth in computation is in data-parallel applications.

He considers business analytics, the entirety of big data and a majority of even traditional simulation codes to be strong candidates for the cloud or utility supercomputing model.

“Sure, people still use MPI, they still use fast interconnect – but we have cases (and we hope to publish soon) where folks are running Monte Carlo simulations as a data-parallel problem. There’s a small MPI cluster that’s running the simulation, but the overall structure of the computation is parallel,” says Stowe.
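
The structure Stowe is describing is easy to sketch: the outer layer of the computation is a set of independent batches that never need to communicate, even if each batch internally hands work to a small MPI job. The snippet below is a minimal, hypothetical illustration of that outer data-parallel layer (estimating pi from independent Monte Carlo batches); it is not Cycle’s software, and the batch contents are placeholders.

```python
# Minimal sketch of a data-parallel Monte Carlo computation: independent batches
# are scattered across workers and only their results are collected at the end.
import random
from multiprocessing import Pool

def run_batch(args):
    """One independent batch: estimate pi from `samples` random points."""
    seed, samples = args
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(samples))
    return 4.0 * hits / samples

if __name__ == "__main__":
    batches = [(seed, 100_000) for seed in range(32)]  # 32 independent batches
    with Pool() as pool:                               # stand-in for a fleet of cloud nodes
        estimates = pool.map(run_batch, batches)       # no communication between batches
    print("pi ~=", sum(estimates) / len(estimates))
```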

Stowe expects these kinds of data-parallel or high-throughput applications to make up the bulk of new commercial workloads. The activity is coming from a range of verticals: genomics, computational chemistry, even finite element analysis.

Stowe’s final point in the context of MPI applications might be surprising to some. Cycle has seen at least two examples of real-world MPI applications that ran as much as 40 percent better on the Amazon EC2 cloud than on an internal kit that used QDR InfiniBand.

“The only real test of whether or not cloud is right for you is to actually bench it in comparison to the kit you are using in-house,” he advises.

Stowe’s team was not particularly surprised. “A lot of MPI applications under the hood are essentially doing low-interconnect, master-worker kind of workloads,” he adds.
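
That master-worker shape is straightforward to picture: one rank hands out independent tasks, the workers compute without talking to each other, and only task descriptions and results cross the network. The sketch below, written with mpi4py and run with at least two MPI ranks (e.g., mpirun -n 4), is a hypothetical illustration of the pattern rather than any specific customer application.

```python
# Minimal master-worker sketch in mpi4py: rank 0 distributes work and gathers
# results; workers never talk to each other, so interconnect demands stay low.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
workers = comm.Get_size() - 1                           # run with at least 2 ranks

if rank == 0:
    tasks = list(range(100))                            # 100 independent work items
    chunks = [tasks[i::workers] for i in range(workers)]
    for w, chunk in enumerate(chunks, start=1):
        comm.send(chunk, dest=w, tag=1)                 # hand out work
    results = [comm.recv(source=w, tag=2) for w in range(1, workers + 1)]
    print("collected", sum(len(r) for r in results), "results")
else:
    chunk = comm.recv(source=0, tag=1)                  # receive this rank's share
    out = [x * x for x in chunk]                        # stand-in for the real computation
    comm.send(out, dest=0, tag=2)                       # return results; no peer traffic
```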

Stowe readily admits there are applications that require the fastest interconnects and highly-tuned systems – “like weather simulations, nuclear bomb testing, the stuff at Oak Ridge or Sandia” – but he contends that some of the newer applications, especially those written in-house or by a domain scientist as opposed to a computer scientist, often run faster on cloud.

“It’s so cheap to do a bench, so why not just verify it? I’m an engineer at heart, so I’m very practical. We can talk about the theory, but it’s hard to argue with results,” he adds.

Another Tool in the Toolbox

So much of the discussion around HPC cloud focuses on the so-called I/O problem – the bandwidth and latency challenges associated with a general public cloud like Amazon. “What about performance?” critics will ask.

Stowe feels that questions like this assume cloud must replace large capability machines, but that’s not how he sees it.

“I think of it as a radically different kind of capability machine,” says Stowe. “The old kind of capability machine required millions of dollars and tons of planning and special environments to be created, heating/cooling/power, expert staff, and so on. These systems are used very heavily for a certain kind of application, and that’s the right thing to do.”

Stowe looks at utility supercomputing as another tool in the toolbox. It doesn’t need to replace traditional capability machines, which will still be needed for certain kinds of applications. In fact, he says you can think of the Cycle-AWS cloud as another kind of capability machine with an attractive set of benefits (on-demand, pay for what you use, scalable, elastic, lower overhead).

It’s a different branch of the same tree, he says.

IDC’s Dekate takes pretty much the same position. He sees HPC in the cloud and dedicated HPC clusters as complementary.

“The HPC ecosystem is diverse and there’s a class of applications that makes sense for utility supercomputing,” says Dekate. “Solving the diverse needs of the user community requires different kinds of technological capabilities, including dedicated hardware infrastructure and HPC cloud frameworks. Our argument is that one does not have to replace the other. It’s more important to find the right kind of matches for applications that work well in either or both of these cases.”
