Utility Supercomputing Heats Up

By Nicole Hemsoth

February 28, 2013

The HPC cloud space continues to evolve, and one of the companies leading that charge is Cycle Computing. The utility supercomputing vendor recently reported a record-breaking 2012, punctuated by several impressive big science endeavors. One of Cycle’s most significant projects was the creation of a 50,000-core utility supercomputer inside the Amazon Elastic Compute Cloud (EC2).

Built for drug-discovery partners Schrödinger and Nimbus Discovery, the virtual mega-cluster analyzed 21 million drug compounds in just three hours, at a cost of less than $4,900 per hour. The accomplishment caught the attention of IDC analysts Chirag Dekate and Steve Conway, who elected to honor Cycle with their firm’s HPC Innovation Excellence Award.
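
A quick back-of-envelope check, using only the figures reported above, shows what those numbers imply (a rough illustration; exact runtimes and AWS pricing were not published here):

```python
# Back-of-envelope arithmetic from the reported figures above.
cores = 50_000
compounds = 21_000_000
hours = 3
cost_per_hour = 4_900                        # "less than $4,900 per hour"

total_cost = hours * cost_per_hour           # ~$14,700 for the full run
throughput = compounds / hours               # ~7 million compounds per hour
per_core = compounds / cores                 # ~420 compounds per core overall

print(f"Total cost: under ${total_cost:,}")
print(f"Throughput: ~{throughput:,.0f} compounds/hour")
print(f"Per-core workload: ~{per_core:.0f} compounds over the run")
```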

Chirag Dekate, research manager for IDC’s high-performance systems group, explained that the award recognizes those who have best applied HPC to solve critical problems. More specifically, IDC looks for scientific achievement, return on investment, or a combination of the two.

Shortly after the award was announced, HPCwire spoke with Cycle CEO Jason Stowe about the growth of HPC cloud and of his company. Stowe sees 2012 as the turning point, both for the space and for Cycle Computing. “We’ve basically hit the hockey stick growth period where there’s more rapid adoption of the technology,” he says. “Relative to utility supercomputing and HPC cloud in general we are definitely seeing a lot of interest in the space.”

During the Amazon Web Services re:Invent show in November, some big-name customers, including Novartis, Johnson & Johnson, Life Technologies, Hartford Insurance Group and Pacific Life Insurance, came forward to discuss their use of Cycle’s cluster-building software. The companies highlighted their biggest use cases and described how HPC cloud helps move the needle for Fortune 500 firms.

“Utility supercomputing applies to a large variety of companies regardless of their industry,” says Stowe, “because it supports business analytics, it supports various forms of engineering simulations and helps get the science done.”

Cycle’s customer base is well represented across disciplines. “The majority of the top 20 big pharma companies use our software; three of the five largest variable annuity businesses use our software internally and externally or in combination,” says the CEO. The vendor also counts several leading life science companies among its customers, including Schrödinger, which, beyond its initial 50,000-core run, continues to use the Cycle-EC2 cluster for ongoing workloads. Manufacturing and energy companies are also plugging into the Cycle cloud.

There are still technical and cultural barriers to cloud adoption, however. Stowe concedes the point, but adds, only half-jokingly, that Cycle has solved most of the technical challenges. At this juncture, he believes the lag is more on the cultural side, but there are signs of progress.

“We have these traditional companies like Johnson & Johnson and Hartford Life transitioning to a cloud model. That’s a huge cultural indicator, and definitely a sea change from four to five years ago,” he says.

The Business Model

What about the long-term profit potential for a business that relies on data-parallel workloads? The question is met with a three-part answer. First, Stowe says that Cycle has always been profitable; as a bootstrapped company with no investors, it has built its business on a real cash-flow stream. Second, he insists that the vast majority of growth in computation is in data-parallel applications.

He considers business analytics, the entirety of big data, and even a majority of traditional simulation codes to be strong candidates for the cloud or utility supercomputing model.

“Sure, people still use MPI, they still use fast interconnect – but we have cases (and we hope to publish soon) where folks are running Monte Carlo simulations as a data-parallel problem. There’s a small MPI cluster that’s running the simulation, but the overall structure of the computation is parallel,” says Stowe.
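
The structure Stowe describes is easy to sketch: many independent simulation tasks fan out across workers, with a single aggregation step at the end. Below is a minimal, hypothetical illustration in Python, with a plain function standing in for the small per-task MPI jobs he mentions; it is not Cycle’s software.

```python
# Data-parallel Monte Carlo sketch: each worker runs an independent
# simulation task, and results are aggregated once at the end.
import random
from concurrent.futures import ProcessPoolExecutor

def simulate(seed: int, trials: int = 1_000_000) -> float:
    """One independent Monte Carlo task: estimate pi by random sampling."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / trials

if __name__ == "__main__":
    # Fan 32 independent tasks out across local cores; on a utility
    # supercomputer the same structure fans out across thousands of nodes.
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(simulate, range(32)))
    print(f"Aggregated estimate of pi: {sum(estimates) / len(estimates):.5f}")
```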

Stowe expects these kinds of data-parallel or high-throughput applications to make up the bulk of new commercial workloads. The activity is coming from a range of verticals: genomics, computational chemistry, even finite element analysis.

Stowe’s final point, in the context of MPI applications, might surprise some. Cycle has seen at least two real-world MPI applications that ran as much as 40 percent faster on the Amazon EC2 cloud than on in-house kit that used QDR InfiniBand.

“The only real test of whether or not cloud is right for you is to actually bench it in comparison to the kit you are using in-house,” he advises.

Stowe’s team was not particularly surprised. “A lot of MPI applications under the hood are essentially doing low-interconnect, master-worker kind of workloads,” he adds.
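
That shape is straightforward to picture. The hypothetical task-farming sketch below, written with mpi4py and not drawn from any Cycle customer code, shows why such workloads tolerate a slower interconnect: ranks compute their shares of independent tasks without talking to each other and communicate only once, at the end.

```python
# Task-farming sketch with mpi4py: no communication while computing,
# one gather at the end. Interconnect latency barely matters here,
# unlike in tightly coupled, per-timestep MPI codes.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def work(task: int) -> int:
    return task * task                 # stand-in for a real computation

tasks = range(1000)
local = [work(t) for t in tasks if t % size == rank]
gathered = comm.gather(local, root=0)  # the single communication step

if rank == 0:
    results = [r for chunk in gathered for r in chunk]
    print(f"Collected {len(results)} results with one gather")
```

Run with, say, mpirun -n 16 python farm.py; adding ranks widens the task farm without changing the code.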

Stowe readily admits there are applications that require the fastest interconnects and highly tuned systems – “like weather simulations, nuclear bomb testing, the stuff at Oak Ridge or Sandia” – but he contends that some of the newer applications, especially those written in-house or by a domain scientist as opposed to a computer scientist, often run faster in the cloud.

“It’s so cheap to do a bench, so why not just verify it. I’m an engineer at heart, so I’m very practical. We can talk about the theory, but it’s hard to argue with results,” he adds.
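
Taking that advice literally, a first-pass bench can be as simple as timing one identical, fixed workload in each environment and folding in the hourly rate. The sketch below is generic; the workload and the dollar figures are placeholders to be replaced with your own application and prices.

```python
# Generic bench sketch: time the same fixed workload on the in-house
# cluster and on the cloud cluster, then compare cost per run.
import time

def workload() -> None:
    """Placeholder job; swap in a run of the actual application."""
    sum(i * i for i in range(10_000_000))

def bench(label: str, dollars_per_hour: float) -> None:
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    cost = dollars_per_hour * elapsed / 3600
    print(f"{label}: {elapsed:.1f}s, ~${cost:.4f} per run")

bench("in-house", dollars_per_hour=2.50)   # placeholder internal rate
bench("cloud", dollars_per_hour=1.80)      # placeholder on-demand rate
```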

Another Tool in the Toolbox

So much of the discussion around HPC cloud focuses on the so-called I/O problem – the bandwidth and latency challenges associated with a general public cloud like Amazon. “What about performance?” critics will ask.

Stowe feels that such questions assume cloud must replace large capability machines, but that’s not how he sees it.

“I think of it as a radically different kind of capability machine,” says Stowe. “The old kind of capability machine required millions of dollars and tons of planning and special environments to be created, heating/cooling/power, expert staff, and so on. These systems are used very heavily for a certain kind of application, and that’s the right thing to do.”

Stowe looks at utility supercomputing as another tool in the toolbox. It doesn’t need to replace traditional capability machines, which will still be needed for certain kinds of applications. In fact, he says you can think of the Cycle-AWS cloud as another kind of capability machine with an attractive set of benefits (on-demand, pay for what you use, scalable, elastic, lower overhead).

It’s a different branch of the same tree, he says.

IDC’s Dekate takes pretty much the same position. He sees HPC in the cloud and dedicated HPC clusters as complementary.

“The HPC ecosystem is diverse and there’s a class of applications that makes sense for utility supercomputing,” says Dekate. “Solving the diverse needs of the user community requires different kinds of technological capabilities, including dedicated hardware infrastructure and HPC cloud frameworks. Our argument is that one does not have to replace the other. It’s more important to find the right kind of matches for applications that work well in either or both of these cases.”
