Today’s IT leaders face many challenges in keeping their infrastructure up to date. Maintaining and upgrading on-premises infrastructure consumes time and resources, yet migrating to the cloud can seem complex or risky. Delaying the transition, however, can stifle innovation and slow scientific advances. Scientific research demands powerful, flexible infrastructure that supports scalable workloads.
Google’s High Performance Computing solutions offer on-demand access to VMs that researchers can customize for their needs, with easy onboarding and sustained use discounts to drive down costs. On Google Cloud, each team gets its own scalable, tailor-made cluster: users can provision the exact number of cores and amount of memory their workload requires, helping control cloud spend without sacrificing performance. This flexibility helps users solve problems faster, reduce queue times for large batch workloads, and relieve compute resource limitations, all within the privacy and security of their own Google Cloud accounts. With the latest NVIDIA T4 GPUs and VMs optimized specifically for HPC and machine-learning workloads, Google Cloud provides the compute, storage, and accelerator solutions for today’s data-intensive projects.
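As a sketch of what that customization looks like in practice, a custom-shaped VM with an attached T4 GPU can be created from the gcloud command line. The instance name, zone, vCPU count, and memory below are illustrative placeholders, not recommendations:

```shell
# Create a VM with a custom shape and one NVIDIA T4 GPU.
# Name, zone, and sizes are placeholders; pick values for your workload.
gcloud compute instances create my-hpc-node \
    --zone=us-central1-a \
    --custom-cpu=32 \
    --custom-memory=120GB \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=debian-9 \
    --image-project=debian-cloud
```

Note that GPU-attached instances require `--maintenance-policy=TERMINATE`, because instances with accelerators cannot live-migrate during host maintenance.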
“As CIO, I want to make sure that I provide the resources that these researchers and scientists need to do their job…It’s a win-win for us to get on the cloud.”—Roy Sookhoo, Chief Information Officer, SUNY Downstate Medical Center
Read how researchers use Google Cloud to solve challenges and get results for complex workloads.
- Researchers at SUNY Downstate’s Neurosim Lab use Google Cloud Platform’s Slurm integration to seamlessly autoscale their detailed simulations of brain circuits. Using 50,000 cloud-based processors instead of 500 on site, they reduced their run time from days to hours. According to Salvador Dura-Bernal, Research Assistant Professor at the Neurosim Lab, “running the models on Preemptible VM instances is four times cheaper and allows us to try more hypotheses because we can run the tests faster.” Read the full case study.
- At the University of North Carolina at Chapel Hill, the Research Computing team collaborated with Techila Technologies and Google Cloud to accelerate the processing of medical images. Their proof-of-concept test case cut the time required to reconstruct images such as MRI scans from one week to only three hours. Read the full case study.
- By moving their workflow to Google Compute Engine, a team of biologists at the University of York was able to assemble 60 gigabases of microbial DNA on a virtual 96-core server with nearly four terabytes of memory. “We hadn’t been able to run this workflow at all,” says Dr. Peter Ashton, Head of the Genomics and Bioinformatics Laboratory at York, “but using Google VMs makes this genome assembly possible, accessible to more researchers, and more affordable.” Read the full case study.
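The two cost and scale patterns in the case studies above, preemptible capacity for cheap batch runs and large-memory shapes for assembly, each map to standard Compute Engine flags. A minimal sketch, with instance names, zones, and machine types chosen only for illustration:

```shell
# Preemptible instance for batch simulation work: substantially cheaper,
# but Compute Engine may reclaim it at any time, so the workload
# must tolerate interruption (e.g., via checkpointing or a Slurm requeue).
gcloud compute instances create sim-worker-001 \
    --zone=us-central1-b \
    --machine-type=n1-highcpu-96 \
    --preemptible

# Large-memory instance for a memory-bound job such as genome assembly.
# The machine type here is illustrative; choose one sized to your data.
gcloud compute instances create assembly-node \
    --zone=us-central1-b \
    --machine-type=n1-ultramem-160
```

Because instances are billed per second while running, deleting them as soon as a batch job finishes is what keeps the preemptible approach four times cheaper in practice.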
Experience HPC on Google Cloud
- Register to attend HPC Day Boston on October 8 and HPC Day DC on October 29 to learn how to manage scalable high-performance computing in the cloud.
- Check out our web page to learn more about how you can use Google Cloud to shorten time to completion for your most complex HPC workloads and turn an idea into a discovery, a hypothesis into a cure, or an inspiration into a product.
Why Google Cloud
By choosing Google Cloud, you build on the same future-proof infrastructure that allows Google to return billions of search results in milliseconds, support more than 500 hours of content uploaded to YouTube every minute, and provide storage for more than 1.5 billion Gmail users.