Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

August 20, 2013

Cloud HPC Firm Dares Scientists to Ask Big Questions

Alex Woodie

Cloud-based supercomputing is, theoretically, a great idea, but the trend has not taken off as some in the HPC field believed it would. That isn’t stopping the folks at Cycle Computing, who say their Amazon-based supercomputers are not only helping scientists and researchers get real work done, but freeing their brains to ask the really big questions.

Scientific creativity is being hamstrung by the finite resources of traditional fixed-size supercomputing infrastructures, Cycle Computing CEO Jason Stowe said in a recent video. While all kinds of advances are being made in the HPC arena, particularly on the software side, all too often scientists and researchers can’t adequately explore their ideas or ask the big questions due to a sheer lack of HPC capacity.

“We end up with an innovation bottleneck with today’s fixed-sized clusters,” Stowe said in the video. “We get into a long-term habit [where] many researchers and engineers are essentially forced to subtly confine the questions they ask to the 256 cores of infrastructure that they were able to afford last year.

“And this is not what we want,” Stowe continued. “We want researchers asking the big question, asking the one that will move their science forward, move their business forward, fundamentally push humanity forward, and fixed-size internal infrastructure is really bad at that.” 

Cloud-based HPC resources, such as the ones that Cycle Computing enables, are a much better approach to solving complex scientific and engineering problems, particularly for researchers at smaller institutions who don’t have access to big supercomputers, Stowe said.

“We think that cloud will enable us to put supercomputers at researchers’ fingertips,” he said. “They’ll be able to push buttons and build tens of thousands of core clusters that will fundamentally change the category of science that they’re able to answer, the types of business insights that they’ll get from analytics, and the types of simulations that they’ll be able to run.”

Part and parcel of this approach is a move toward thinking in “dollars per unit of science.” Cycle Computing has several examples of customers applying a large number of CPUs to a particular scientific challenge for a particular cost.

For example, a large pharmaceutical company built a 10,600-instance cluster with Cycle Computing’s utility HPC solution. Instead of acquiring 14,400 square feet of data center space and spending an estimated $44 million to build the system in-house, Cycle Computing provisioned the cluster from Amazon’s AWS cloud in about two hours. “They ran 40 years of science in 11 hours for $4,372,” Stowe said.
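The “dollars per unit of science” framing reduces to simple arithmetic on the figures above. A rough sketch (the in-house $44 million is a capital estimate, so comparing it against a single run’s cloud bill is loose by design):

```python
# Back-of-the-envelope arithmetic for the pharmaceutical run described above,
# using only the figures quoted in the article.

HOURS_PER_YEAR = 365 * 24

years_of_science = 40           # serial compute time the workload represents
wall_clock_hours = 11           # elapsed time on the 10,600-instance cluster
cloud_cost_usd = 4_372          # reported AWS bill for the run

serial_hours = years_of_science * HOURS_PER_YEAR
effective_parallelism = serial_hours / wall_clock_hours
usd_per_science_year = cloud_cost_usd / years_of_science

print(f"Serial compute: {serial_hours:,} hours")
print(f"Effective parallelism: ~{effective_parallelism:,.0f}x")
print(f"Cost per year of science: ${usd_per_science_year:,.2f}")
```

On these numbers, the workload compressed roughly 350,000 serial compute-hours into 11 wall-clock hours, at about $109 per year of science.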

While the HPC community puts a lot of focus on the big supercomputers that make the Top 500 list, that isn’t necessarily the most important news, Stowe said. “The workloads are changing,” he said. “It’s no longer about absolute performance. It’s about throughput. We don’t care about the run time of the individual jobs. We care about the run time of workloads. And modern science workloads are all pleasantly parallel.”

