With one of the largest biotechnology events in the world just around the corner, an announcement detailing a supercomputer dedicated exclusively to a biosciences application is bound to draw attention. When that machine could place in the top 100 of the Top500 list of the world’s most powerful clusters yet exists only in the cloud, it is certainly worth a second look.
For some scientists, like Jacob Corn, an associate research scientist at Genentech, it’s quite frustrating to have code and experiments ready to roll yet be stuck waiting for relatively low core-count resources to chew through to final results.
Although clouds are so often discussed in the context of avoiding up-front hardware expenditures, the silver lining for many scientists with nicely parallel workloads is that they can scale to the sky to improve time to results, assuming, of course, that the management layer stays stable at such core counts.
When Corn and his colleagues received access to a “supercomputer in the cloud,” which its creators have stated is on par with a Top500 cluster, the researchers were able to eliminate the lag time and speed their time to lab-ready results with 10,000 cores at their disposal.
When the job finished in a fraction of the typical time, the team shut down the virtual cluster just as quickly as they had scaled it up, a privilege reserved for cloud users, who are spared the expense of maintaining hefty hardware on-site for such infrequent, high-demand runs.
The avoidance of a direct capital investment and maintenance of massive amounts of hardware is a strong pro-cloud argument for some, but scientists like Corn are seeing time-sensitivity as a top driver.
While it might otherwise have taken several weeks to get the results of a run back before they could be validated in a lab, his team was able to get results back in eight hours. With the help of 10,000 cores, courtesy of Amazon’s hardware coupled with CycleCloud, the software force behind Cycle Computing’s HPC cloud service, Corn found that removing the wait time for computation sped research along and allowed for more streamlined, efficient use of his team’s time.
Genentech, a subsidiary of pharmaceutical giant Roche, contracted Cycle Computing for these 80,000 compute-hours of crunching on the molecular dynamics application at the heart of Jacob Corn’s research. Corn told HPC in the Cloud that this type of application is ideally suited to the cloud, as it continues to perform better as more cores are piled on.
As Corn noted, “if at any point we wanted to make it faster on our in-house machines, we’d just keep buying more and more computers. With this kind of embarrassingly parallel application, basically if you add 50 cores it will run 50 times faster and so on. With the cloud now we just pop the numbers in the request box and we can immediately have what we need.”
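The near-linear scaling Corn describes holds because each test run is independent of the others. As a minimal sketch (the function names and the workload here are hypothetical stand-ins, not Genentech’s actual code), dividing independent tasks across a worker pool looks like this in Python:

```python
from multiprocessing import Pool

def score_conformation(seed):
    """Stand-in for one independent molecular-dynamics test run.
    (Hypothetical placeholder; the real application is not public.)"""
    return (seed * 2654435761) % 1000  # cheap deterministic "result"

def run_batch(seeds, workers):
    # Each task is independent, so adding workers shortens wall-clock
    # time roughly in proportion -- the scaling Corn describes.
    with Pool(workers) as pool:
        return pool.map(score_conformation, seeds)

if __name__ == "__main__":
    results = run_batch(range(100), workers=4)
    print(len(results))  # 100 results, in input order
```

Doubling `workers` roughly halves the wall-clock time only while tasks outnumber workers and per-task overhead stays negligible, which is exactly the regime of an embarrassingly parallel job like this one.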
As a CPU-bound application without a great deal of communication between the nodes or major data size concerns, it ran particularly well on an Amazon EC2 C1 Extra Large Instance type (c1.xlarge in API-speak) versus one of the more robust and expensive HPC-flavored instances that tout stronger 10GbE interconnects.
The scientist’s only standout concern was security and data protection, but he said his IT team was completely confident in the level of security being provided. As he stated, “they could ensure that everything was secure in the back and forth and that they could ensure protection and scrubbing of the results.”
He went into detail about the time-critical element that made clouds attractive, stating:
“Our internal clusters are running jobs that aren’t as time sensitive as others; they’re things we don’t need the answer to immediately. With some of our research, however, we sometimes have code and experiments all ready to run but we end up waiting for the computation to complete. For me, it usually takes the same amount of time to write the code for an experiment as it does to actually get the results back from the computation end before we can take the results into a lab setting to verify. Now that whole end of time is cut out so basically things can go from an idea in my head to the time it takes to write the code then to the results.”
Corn stressed again that the same job Cycle Computing handled, which would otherwise have taken more than a month, finished completely in eight hours for just under $9,000.
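The quoted figures are internally consistent and worth a quick sanity check: 80,000 compute-hours spread across 10,000 cores is eight hours of wall-clock time, and eight hours at the $1,060-per-hour rate cited elsewhere in this piece comes to roughly $8,500, i.e., “just under $9,000”:

```python
# Back-of-the-envelope check of the figures quoted in the article.
core_hours = 80_000        # total compute reported for the run
cores = 10_000             # size of the virtual cluster
rate_per_hour = 1_060      # USD per hour, as quoted by Cycle Computing

wall_clock_hours = core_hours / cores          # 8.0 hours
total_cost = wall_clock_hours * rate_per_hour  # 8480.0 USD

print(wall_clock_hours, total_cost)
```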
He also said that another appealing aspect of the cloud for his company is that it’s rare for any of the scientists to need 10,000 cores on a given day. When their jobs finish, they simply shut down the resources, incurring no further charges and carrying no albatross of burdensome hardware to maintain, cool, support, and so on.
Some might be able to estimate what a 10,000-core cluster would cost on average, in terms of the hardware, power, cooling, and manpower to feed it regularly. By using a cloud-based supercomputer, however, once Genentech had crunched through to the core of its mission, it simply powered down the virtual instances and stopped incurring charges. When weighed against the numerous up-front and recurring investments in a comparable physical cluster, it is worth noting that utilization worries are also solved, since the resources roll back the moment the job completes.
Still, while it might sound simple to spin up a cluster on cloud-based resources, it takes some serious expertise to move so high up the core-count ladder. Jason Stowe, CEO and founder of Cycle Computing, which provisioned and managed every aspect of Genentech’s cluster, weighed in on cloud infrastructure for HPC. He says that scaling well in the cloud takes serious support; Amazon Web Services provides the bare infrastructure, but beyond that, support is severely limited.
One of Stowe’s big claims about the cluster his company spun up for Genentech is that, based on core count (not benchmarks), it is on par with #74 on the Top500 list of the most powerful supercomputers. He notes that systems in that range of the list have lower core counts but much faster interconnects, a fact that didn’t matter much to Genentech for this type of application, which required little messaging.
Cycle released some rather extensive details about the process behind provisioning this 10,000-core supercomputer on Amazon EC2, which allowed the biosciences company to scale up resources for its molecular dynamics application and perform thousands of tests in around eight hours at $1,060 per hour.
In mid-March of this year, Cycle Computing shared some lessons learned from building a 4,096-core cloud-based supercomputer, which built upon previous work setting up a 2,000-core cluster. The team found that while there were challenges in making sure the configuration management software could keep pace, the schedulers could scale, and the price-performance balance could hold, bumping the core count higher was possible.
Stowe insists that the news about these clusters brought some new users their way who were looking for secure, encrypted clusters that were able to support a range of schedulers (Grid Engine, PBS, Condor, etc.) and provided the scalability and managed environment needed for HPC applications.
Cycle Computing does appear to be doing some bang-up business with HPC customers over the last couple of years, managing to spin up some impressive clusters on EC2 to run workloads ranging from bioinformatics to complex simulations. The company, which has been around for roughly six years, was founded by Jason Stowe, who began by helping companies use Condor for grid management.
During a conversation in advance of the announcement, Stowe noted that one of the notable elements is that on the user side, there isn’t much work to be done—users simply click to get the cluster running and have access to the full cluster in under an hour with no capital investment.
While there were some significant management and node-specific issues the company encountered during the creation and massive scaling of cloud-based clusters, Stowe says that the company kept fine-tuning their CycleCloud software with each experience.
He notes that the 10,000-core experiment for Genentech wasn’t just to serve their specific needs; it was something of a proof of concept to show that their tools could scale gracefully and reliably. The team has consistently made adjustments to CycleCloud and CycleServer, finding, for instance, that Torque wasn’t quite as efficient as Condor. They were also able to build on their experiences with Purdue University’s 40,000-core system, in addition to contracts with life sciences companies and users in a number of other HPC-heavy fields.
Cycle Computing has a number of customers in the life sciences arena that make use of Amazon’s cloud. They’ve also been working with others in financial services, engineering, insurance and a number of industries with complex computational demands.
As Stowe said of the recent accomplishment, “With this repeatable 10,000 core cluster under our belts, our team is already working on the next generation of secure, mega-elastic and fully-supported cloud clusters that are both timeframe and bottom-line friendly.”