Supercomputer in the Cloud Speeds Biotech Research

By Nicole Hemsoth

April 8, 2011

With one of the largest biotechnology events in the world just around the corner, an announcement detailing a supercomputer dedicated exclusively to a biosciences application is bound to draw attention. When that machine would rank in the top 100 of the Top500 list of the world's most powerful clusters and exists only in the cloud, it is certainly worth a second look.

For scientists like Jacob Corn, an associate research scientist at Genentech, it is quite frustrating to have code and experiments ready to roll yet be stuck waiting for relatively low core-count resources to chew through to final results.

Although clouds are most often discussed in terms of avoiding up-front hardware expenditures, the silver lining for many scientists with nicely parallel workloads is that they can scale to the sky to improve time to results, assuming, of course, that the management layer remains stable at such core counts.

When Corn and his colleagues received access to a “supercomputer in the cloud,” which its creators state is on par with a Top500 cluster, the researchers were able to eliminate that lag time and speed their way to lab-ready results with 10,000 cores at their disposal.

When the job finished, in a small fraction of the typical time, the team shut down the virtual cluster just as quickly as they had scaled it up, a privilege reserved for cloud users, who are spared the expense of maintaining hefty on-site hardware for such infrequent, high-demand runs.

Avoiding a direct capital investment in, and the maintenance of, massive amounts of hardware is a strong pro-cloud argument for some, but scientists like Corn see time sensitivity as the top driver.

While it might otherwise have taken several weeks to get the results of a run back before they could be validated in a lab, his team got results in eight hours. With the help of 10,000 cores, courtesy of Amazon's hardware coupled with CycleCloud, the software behind Cycle Computing's HPC cloud service, Corn found that removing the wait for computation sped research along and allowed more streamlined, efficient use of his team's time.

Genentech, a subsidiary of pharmaceutical giant Roche, contracted Cycle Computing to handle the 80,000 compute hours of crunching on the molecular dynamics application at the heart of Corn's research. Corn told HPC in the Cloud that this type of application is ideally suited to the cloud, as it continues to perform better as more cores are piled on.

As Corn noted, “if at any point we wanted to make it faster on our in-house machines, we'd just keep buying more and more computers. With this kind of embarrassingly parallel application, basically if you add 50 cores it will run 50 times faster and so on. With the cloud now we just pop the numbers in the request box and we can immediately have what we need.”
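For readers less familiar with the term, “embarrassingly parallel” means the work splits into fully independent tasks with no communication between them, so throughput grows almost linearly with core count. Below is a minimal sketch of the pattern in Python; the task function and counts are hypothetical stand-ins, not Genentech's actual code:

```python
from multiprocessing import Pool

def score_conformation(task_id):
    """Stand-in for one independent molecular dynamics task.
    Each task needs no data from any other task, which is what
    makes the workload embarrassingly parallel."""
    result = sum(i * i for i in range(100_000))  # placeholder compute
    return task_id, result

if __name__ == "__main__":
    # Doubling the worker count roughly halves the wall-clock time,
    # up to the number of independent tasks available.
    with Pool(processes=8) as pool:
        results = pool.map(score_conformation, range(1_000))
    print(f"completed {len(results)} independent tasks")
```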

As a CPU-bound application with little communication between nodes and no major data-size concerns, it ran particularly well on Amazon EC2's C1 Extra Large instance type (c1.xlarge in API-speak), versus one of the more robust and expensive HPC-flavored instances that tout stronger 10GbE interconnects.
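For illustration only, here is roughly what requesting a batch of c1.xlarge instances looked like with the boto library of that era; the AMI ID and key name are placeholders, and in practice CycleCloud automates this provisioning rather than exposing raw API calls to the scientist:

```python
import boto.ec2

# Connect to the EC2 region where the cluster will run
# (assumes AWS credentials are configured in the environment).
conn = boto.ec2.connect_to_region("us-east-1")

# Request a batch of compute nodes. With 8 cores per c1.xlarge,
# 1,250 instances would supply the 10,000 cores cited in the article.
reservation = conn.run_instances(
    "ami-00000000",             # placeholder AMI with the MD application baked in
    min_count=1250,
    max_count=1250,
    instance_type="c1.xlarge",  # CPU-heavy type; no 10GbE needed for this workload
    key_name="my-keypair",      # placeholder SSH key pair name
)
print(f"requested {len(reservation.instances)} instances")
```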

The scientist's only standout concern was security and data protection, but he said his IT team was completely confident in the level of security being provided. As he stated, “they could ensure that everything was secure in the back and forth and that they could ensure protection and scrubbing of the results.”

He went into detail about the time-critical element that made clouds attractive, stating: 

 “Our internal clusters are running jobs that aren’t as time sensitive as others; they’re things we don’t need the answer to immediately. With some of our research, however, we sometimes have code and experiments all ready to run but we end up waiting for the computation to complete. For me, it usually takes the same amount of time to write the code for an experiment as it does to actually get the results back from the computation end before we can take the results into a lab setting to verify. Now that whole end of time is cut out so basically things can go from an idea in my head to the time it takes to write the code then to the results.”

Corn stressed again that the same job Cycle Computing handled, which would have taken more than a month in-house, finished completely in eight hours for just under $9,000.

He also said that another appealing aspect of the cloud for his company is that it is rare for any of the scientists to need 10,000 cores on a given day. When their jobs finish, they simply shut down the resources, incur no further charges, and avoid the albatross of burdensome hardware to maintain, cool, support, and so on.

Some might be able to estimate what a comparable 10,000-core cluster would cost on average in hardware, power, cooling, and the manpower to feed it regularly. With a cloud-based supercomputer, however, once Genentech had crunched through to the core of its mission, it simply powered down the virtual instances and stopped incurring charges. Against the numerous up-front and recurring hardware investments of a similar physical cluster, it is worth noting that utilization worries also disappear, since the resources roll back as soon as the job completes.

Still, while it might sound simple to spin up a cluster on cloud-based resources, it takes serious expertise to move so high up the core-count ladder. Jason Stowe, CEO and founder of Cycle Computing, which provisioned and managed every aspect of Genentech's cluster, weighed in on cloud infrastructure for HPC. He says that scaling well in the cloud takes serious support: Amazon Web Services provides the bare infrastructure, but beyond that, support is severely limited.

One of Stowe's big claims about the cluster his company spun up for Genentech is that, based on core count (not benchmarks), it is on par with #74 on the Top500 list of the most powerful supercomputers. He notes that in that range of the list the core counts are lower but the interconnects are much faster, a fact that didn't matter much to Genentech for an application that required little messaging.

Cycle released rather extensive details about the process of provisioning this 10,000-core supercomputer on Amazon EC2, which allowed the biosciences company to scale up resources and run thousands of molecular dynamics tests in around eight hours at $1,060 per hour.
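The article's own figures hang together: at $1,060 per hour for roughly eight hours, the run comes to about $8,480, matching the “just under $9,000” total, and 10,000 cores over eight hours equals the 80,000 compute hours cited earlier. A quick sanity check in Python:

```python
# Sanity-check the figures quoted in the article.
cores = 10_000
hours = 8
rate_per_hour = 1_060  # USD per hour, as reported

total_cost = rate_per_hour * hours  # 8,480 USD: "just under $9,000"
core_hours = cores * hours          # 80,000 core-hours, as cited
cost_per_core_hour = total_cost / core_hours

print(f"total cost:          ${total_cost:,}")
print(f"core-hours:          {core_hours:,}")
print(f"cost per core-hour:  ${cost_per_core_hour:.3f}")  # about $0.106
```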

In mid-March of this year, Cycle Computing shared lessons learned from building a 4,096-core cloud-based supercomputer, which built on previous work setting up a 2,000-core cluster. The team found that bumping the core count higher was possible, though there were challenges in making sure the configuration management software could keep pace, the schedulers could scale, and price and performance could hold steady.

Stowe insists that news of these clusters brought new users their way, users looking for secure, encrypted clusters that could support a range of schedulers (Grid Engine, PBS, Condor, etc.) and provide the scalability and managed environment HPC applications need.
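To give a feel for what handing an embarrassingly parallel sweep to one of those schedulers involves, here is a hypothetical Python wrapper that writes a Condor submit description for 10,000 independent tasks and passes it to condor_submit; the run_md.sh executable is a placeholder, and a real CycleCloud deployment handles this orchestration for the user:

```python
import os
import subprocess

# A minimal Condor submit description: one queued job per independent
# task, each receiving its task index as an argument. "run_md.sh" is a
# placeholder for whatever script wraps the molecular dynamics binary.
submit_description = """\
universe   = vanilla
executable = run_md.sh
arguments  = $(Process)
output     = logs/task_$(Process).out
error      = logs/task_$(Process).err
log        = logs/sweep.log
queue 10000
"""

os.makedirs("logs", exist_ok=True)
with open("sweep.sub", "w") as f:
    f.write(submit_description)

# condor_submit expands "queue 10000" into 10,000 independent jobs,
# which the scheduler then spreads across all available cores.
subprocess.run(["condor_submit", "sweep.sub"], check=True)
```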

Cycle Computing does appear to be doing some bang-up business with HPC customers over the last couple of years, managing to spin up some impressive clusters on EC2 to run workloads ranging from bioinformatics to complex simulations. The company, which has been around for roughly six years, was founded by Jason Stowe, who began by helping companies make use of Condor for grid management purposes.

During a conversation in advance of the announcement, Stowe pointed out that on the user side there isn't much work to be done: users simply click to get the cluster running and have access to the full cluster in under an hour, with no capital investment.

While the company encountered some significant management and node-specific issues during the creation and massive scaling of cloud-based clusters, Stowe says it kept fine-tuning its CycleCloud software with each experience.

He notes that the 10,000-core experiment for Genentech wasn't just about serving one customer's needs; it was something of a proof of concept to show that Cycle's tools could scale gracefully and reliably. The team has consistently made adjustments to CycleCloud and CycleServer, finding, for instance, that Torque wasn't quite as efficient as Condor at this scale. They were also able to build on their experience with Purdue University's 40,000-core system, in addition to contracts with life sciences companies and users in a number of other HPC-heavy fields.

Cycle Computing has a number of customers in the life sciences arena that make use of Amazon's cloud. They've also been working with others in financial services, engineering, insurance, and other industries with complex computational demands.

As Stowe said of the recent accomplishment, “With this repeatable 10,000 core cluster under our belts, our team is already working on the next generation of secure, mega-elastic and fully-supported cloud clusters that are both timeframe and bottom-line friendly.”
 
