Utility Supercomputing Heats Up

By Tiffany Trader

February 28, 2013

The HPC in the cloud space continues to evolve and one of the companies leading that charge is Cycle Computing. The utility supercomputing vendor recently reported a record-breaking 2012, punctuated by several impressive big science endeavors. One of Cycle’s most significant projects was the creation of a 50,000-core utility supercomputer inside the Amazon Elastic Compute Cloud.

Built for pharmaceutical companies Schrödinger and Nimbus Discovery, the virtual mega-cluster was able to analyze 21 million drug compounds in just 3 hours for less than $4,900 per hour. The accomplishment caught the attention of IDC analysts Chirag Dekate and Steve Conway, who elected to honor Cycle with their firm’s HPC Innovation Excellence Award.

Chirag Dekate, research manager for IDC's high-performance systems group, explained that the award recognizes those who have best applied HPC to solve critical problems. More specifically, IDC looks for scientific achievement, ROI, or a combination of the two.

HPCwire spoke with Cycle CEO Jason Stowe shortly after the award was announced about the growth in HPC cloud and his company. Stowe really sees 2012 as the turning point – both for the space and for Cycle Computing. “We’ve basically hit the hockey stick growth period where there’s more rapid adoption of the technology,” he says. “Relative to utility supercomputing and HPC cloud in general we are definitely seeing a lot of interest in the space.”

During the Amazon Web Services re:Invent show in November, some big-name customers, including Novartis, Johnson & Johnson, Life Technologies, along with Hartford Insurance Group and Pacific Life Insurance, came forward to discuss their use of Cycle's cluster-building software. The companies highlighted several of their biggest use cases and described how HPC cloud helps move the needle for Fortune 500 firms.

“Utility supercomputing applies to a large variety of companies regardless of their industry,” says Stowe, “because it supports business analytics, it supports various forms of engineering simulations and helps get the science done.”

Cycle’s customer base is well-represented across disciplines. “The majority of the top 20 big pharma companies use our software; three of the five largest variable annuity businesses use our software internally and externally or in combination,” says the CEO. The vendor also counts several leading life science companies among its customer base, including Schrödinger, who in addition to their initial 50k core run, continues to use the Cycle-EC2 cluster for ongoing workloads. Manufacturing and energy companies are also plugging into the Cycle cloud.

There are still technical and cultural barriers to cloud adoption, however. Stowe concedes the point, but adds, only half-jokingly, that Cycle has solved most of the technical challenges. At this juncture, he believes the lag is more on the cultural side, but there are signs of progress.

“We have these traditional companies like Johnson & Johnson and Hartford Life transitioning to a cloud model. That’s a huge cultural indicator, and definitely a sea change from four to five years ago,” he says.

The Business Model

What about the long-term profit potential for a business that relies on data parallel workloads? The question is met with a three-part answer. First off, Stowe says that Cycle has always been profitable. As a bootstrapped company, they have no investors. They’ve built a business off of a real cash-flow stream. Second, he insists that the vast amount of growth in computation is in the area of data-parallel applications.

He considers business analytics, the entirety of big data and a majority of even traditional simulation codes to be strong candidates for the cloud or utility supercomputing model.

“Sure, people still use MPI, they still use fast interconnect – but we have cases (and we hope to publish soon) where folks are running Monte Carlo simulations as a data-parallel problem. There’s a small MPI cluster that’s running the simulation, but the overall structure of the computation is parallel,” says Stowe.
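Cycle has not published the internals of those runs, but the pattern Stowe describes, many independent simulation tasks whose results are combined at the end, can be sketched in a few lines of Python. The names and parameters below are illustrative only, not Cycle's actual code:

```python
import random
from multiprocessing import Pool


def simulate(args):
    """One independent Monte Carlo task: count random points that land
    inside the unit quarter-circle. Each task needs no communication
    with the others, which is what makes the job data-parallel."""
    seed, n = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)


if __name__ == "__main__":
    # 32 independent slices of work; on a utility supercomputer these
    # workers could just as well be separate cloud nodes.
    tasks = [(seed, 100_000) for seed in range(32)]
    with Pool() as pool:
        hits = pool.map(simulate, tasks)
    total = sum(n for _, n in tasks)
    print(4.0 * sum(hits) / total)  # estimate of pi, roughly 3.14
```

The scatter/gather structure is the whole point: each slice runs anywhere, and only the final reduction ties them together.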

Stowe expects these kinds of data-parallel or high-throughput applications to make up the bulk of new commercial workloads. The activity is coming from a range of verticals: genomics, computational chemistry, even finite element analysis.

Stowe’s final point in the context of MPI applications might be surprising to some. Cycle has seen at least two examples of real-world MPI applications that ran as much as 40 percent better on the Amazon EC2 cloud than on an internal kit that used QDR InfiniBand.

“The only real test of whether or not cloud is right for you is to actually bench it in comparison to the kit you are using in-house,” he advises.

Stowe’s team was not particularly surprised. “A lot of MPI applications under the hood are essentially doing low-interconnect, master-worker kind of workloads,” he adds.

Stowe readily admits there are applications that require the fastest interconnects and highly-tuned systems – “like weather simulations, nuclear bomb testing, the stuff at Oak Ridge or Sandia” – but he contends that some of the newer applications, especially those written in-house or by a domain scientist as opposed to a computer scientist, often run faster on cloud.

“It’s so cheap to do a bench, so why not just verify it. I’m an engineer at heart, so I’m very practical. We can talk about the theory, but it’s hard to argue with results,” he adds.
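That advice is easy to act on. A benchmark need not be elaborate; timing the same representative job in both environments is often enough to settle the question. A minimal sketch, where the workload function is a hypothetical stand-in for a real job:

```python
import time


def bench(run_workload, repeats=3):
    """Time a workload a few times and return the best wall-clock
    result in seconds. Run the same function on the in-house kit and
    on cloud instances, then compare the two numbers directly."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        run_workload()
        best = min(best, time.perf_counter() - start)
    return best


def sample_workload():
    # Stand-in for a real job submission; swap in your own code.
    sum(i * i for i in range(1_000_000))


print(f"best of {3}: {bench(sample_workload):.3f} s")
```

Taking the best of several repeats smooths out warm-up and scheduling noise, which matters more in a shared cloud environment than on dedicated hardware.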

Another Tool in the Toolbox

So much of the discussion around HPC cloud focuses on the so-called I/O problem – the bandwidth and latency challenges associated with a general public cloud like Amazon. “What about performance?” critics will ask.

Stowe feels that questions like this presume cloud must replace large capability machines, but that’s not how he sees it.

“I think of it as a radically different kind of capability machine,” says Stowe. “The old kind of capability machine required millions of dollars and tons of planning and special environments to be created, heating/cooling/power, expert staff, and so on. These systems are used very heavily for a certain kind of application, and that’s the right thing to do.”

Stowe looks at utility supercomputing as another tool in the toolbox. It doesn’t need to replace traditional capability machines, which will still be needed for certain kinds of applications. In fact, he says you can think of the Cycle-AWS cloud as another kind of capability machine with an attractive set of benefits (on-demand, pay for what you use, scalable, elastic, lower overhead).

It’s a different branch of the same tree, he says.

IDC’s Dekate takes pretty much the same position. He sees HPC in the cloud and dedicated HPC clusters as complementary.

“The HPC ecosystem is diverse and there’s a class of applications that makes sense for utility supercomputing,” says Dekate. “Solving the diverse needs of the user community requires different kinds of technological capabilities, including dedicated hardware infrastructure and HPC cloud frameworks. Our argument is that one does not have to replace the other. It’s more important to find the right kind of matches for applications that work well in either or both of these cases.”
