NVIDIA Takes Aim at GPU Acceleration for Bioscience Applications

By Michael Feldman

January 14, 2010

NVIDIA has announced the Tesla Bio Workbench, a new program designed to bring together the computational components needed to run GPU-accelerated bioscience applications. The rationale is the same one NVIDIA has been touting ever since it got into the high performance computing business: take advantage of the superior performance of the GPU to lower the entry point for HPC. In this case, the company has assembled a GPU-centric workbench designed specifically for life science researchers.

In a nutshell, the Tesla Bio Workbench includes an array of GPU-capable bioscience codes; a community Web site for downloading the codes and exchanging information in its forums; and, of course, recommendations for Tesla GPU-equipped workstations and clusters. The strategy is to show the biotech community that the applications and hardware are here, and within the reach of more researchers than ever before.

Over the past couple of years, the set of GPU-friendly computational biology codes has grown tremendously, thanks mainly to CUDA ports of the CPU versions of the software. A large number of popular molecular dynamics and quantum chemistry packages can now be run on NVIDIA GPUs, including AMBER, GROMACS, NAMD, TeraChem, and VMD, among others. A number of bioinformatics codes like CUDA-SW++ (Smith-Waterman), GPU-HMMER, and MUMmerGPU are also available. All of these can be downloaded via the Tesla Bio Workbench from their respective owners' sites, and many can be had free of charge, especially if their use is limited to academic research.
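
To give a flavor of how these ports work, consider Smith-Waterman database search, the algorithm behind CUDA-SW++. The sketch below is not code from that package; it is a minimal CUDA illustration of the inter-task pattern such ports commonly use, in which each GPU thread scores the query against one database sequence using the standard dynamic-programming recurrence. All names (sw_score, MAX_QUERY) and scoring parameters here are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define MAX_QUERY 128  // cap on query length so the DP rows fit per thread

// One thread per database sequence: score the query against it with the
// Smith-Waterman recurrence (linear gap penalty), keeping just two DP rows.
__global__ void sw_score(const char *db, const int *offsets, const int *lengths,
                         int num_seqs, const char *query, int qlen,
                         int match, int mismatch, int gap, int *best)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= num_seqs || qlen > MAX_QUERY) return;

    const char *seq = db + offsets[s];
    int len = lengths[s];

    int prev[MAX_QUERY + 1], curr[MAX_QUERY + 1];
    for (int j = 0; j <= qlen; ++j) prev[j] = 0;

    int high = 0;
    for (int i = 1; i <= len; ++i) {
        curr[0] = 0;
        for (int j = 1; j <= qlen; ++j) {
            int sub = (seq[i - 1] == query[j - 1]) ? match : mismatch;
            int h = prev[j - 1] + sub;        // diagonal: match or mismatch
            h = max(h, prev[j] + gap);        // gap in the query
            h = max(h, curr[j - 1] + gap);    // gap in the subject
            h = max(h, 0);                    // local alignment: never negative
            curr[j] = h;
            high = max(high, h);
        }
        for (int j = 0; j <= qlen; ++j) prev[j] = curr[j];
    }
    best[s] = high;
}

int main()
{
    const char query[] = "GATTACA";       // toy query, length 7
    const char db[]    = "GATTACAGATCA";  // two sequences packed back to back
    int offsets[] = {0, 7}, lengths[] = {7, 5};
    const int num_seqs = 2;

    char *d_db, *d_q; int *d_off, *d_len, *d_best;
    cudaMalloc(&d_db, sizeof(db));       cudaMemcpy(d_db, db, sizeof(db), cudaMemcpyHostToDevice);
    cudaMalloc(&d_q, sizeof(query));     cudaMemcpy(d_q, query, sizeof(query), cudaMemcpyHostToDevice);
    cudaMalloc(&d_off, sizeof(offsets)); cudaMemcpy(d_off, offsets, sizeof(offsets), cudaMemcpyHostToDevice);
    cudaMalloc(&d_len, sizeof(lengths)); cudaMemcpy(d_len, lengths, sizeof(lengths), cudaMemcpyHostToDevice);
    cudaMalloc(&d_best, num_seqs * sizeof(int));

    sw_score<<<1, num_seqs>>>(d_db, d_off, d_len, num_seqs, d_q, 7, 2, -1, -2, d_best);

    int best[num_seqs];
    cudaMemcpy(best, d_best, sizeof(best), cudaMemcpyDeviceToHost);
    for (int s = 0; s < num_seqs; ++s)
        printf("sequence %d: best local score %d\n", s, best[s]);
    return 0;
}
```

Production codes like CUDA-SW++ add refinements such as packed sequence formats, affine gap penalties, and finer-grained parallelism for long sequences, but the thread-per-sequence skeleton is the same basic idea.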

The motivation behind all this is NVIDIA's recognition that computational biology is among the lowest-hanging fruit for GPU acceleration. Performance increases on the order of 10X to 100X over a CPU are fairly typical for these types of codes. That has not gone unnoticed. "The kind of momentum around GPUs in this domain has been perhaps the biggest and most organic that we've seen," says Sumit Gupta, NVIDIA's senior product manager for the Tesla group. According to him, a lot of biologists have turned to GPUs without any prodding from NVIDIA. The reason, he thinks, is that for many small and moderate-sized bio-research projects, the cost and complexity of high performance computing have become a true pain point.

The life sciences sector is already one of the largest markets for high performance computing. In 2008, 29 percent of the supercomputing cycles on TeraGrid were dedicated to bioscience applications, while another 19 percent went to related codes in chemistry and materials science research. In the commercial realm, HPC demand is being driven by pharmaceutical companies and the emerging genomics industry in their quest for better drugs and treatments. Analyst firm IDC estimates the bioscience vertical is worth well over $1.5 billion to HPC vendors and is expanding at a CAGR of 2.6 percent. That CAGR figure, by the way, is post-recession; in 2008 IDC was forecasting a growth rate of 9.3 percent. Nevertheless, the prospects for HPC in this sector are significant.

Drug discovery, in particular, is one area where HPC promises both to lower costs and to accelerate the pace of research. Today the physical synthesis of drug compounds and the subsequent testing in high-throughput drug screening are expensive and time-consuming, typically representing a five-year R&D cycle. On modern HPC systems, much of this work can be simulated with molecular dynamics and quantum chemistry codes, in essence replacing expensive labor and materials with cheap CPU cycles.

Or GPU cycles, as the case may be. NVIDIA's point with the Tesla Bio Workbench is that GPUs can make computational bioscience a much less expensive proposition than ever before. Because of the data-parallel capabilities of the modern graphics processor, for many science applications a GPU-equipped workstation can replace a small CPU cluster, while a moderate-sized GPU cluster can stand in for a high-end supercomputer. That reduces up-front hardware costs, energy use over the life of the system, and the datacenter space required.
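
What "data parallel" means here in practice: a molecular dynamics timestep spends most of its runtime in a nonbonded interaction loop, and each atom's contribution can be computed independently, so the GPU assigns one thread per atom. The following is a minimal, illustrative sketch of that pattern, assuming a brute-force all-pairs Lennard-Jones energy calculation; it is not a kernel from NAMD or any other package named above, and the names and constants are placeholders.

```cuda
#include <cstdio>
#include <cmath>
#include <vector>
#include <cuda_runtime.h>

// One thread per atom: accumulate that atom's Lennard-Jones energy against
// every other atom. This O(N^2) nonbonded loop is the hot spot that CUDA
// ports of molecular dynamics codes hand to the GPU.
__global__ void lj_energy(const float4 *pos, float *energy, int n,
                          float epsilon, float sigma6)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float e = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float4 pj = pos[j];
        float dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float inv6 = sigma6 / (r2 * r2 * r2);         // (sigma/r)^6
        e += 4.0f * epsilon * (inv6 * inv6 - inv6);   // 4*eps*[(s/r)^12 - (s/r)^6]
    }
    energy[i] = 0.5f * e;  // each pair was visited twice, so halve
}

int main()
{
    const int n = 4096;  // 16 x 16 x 16 atoms on a simple cubic lattice
    std::vector<float4> h_pos(n);
    for (int i = 0; i < n; ++i)
        h_pos[i] = make_float4(4.0f * (i % 16), 4.0f * ((i / 16) % 16),
                               4.0f * (i / 256), 0.0f);  // 4 angstrom spacing

    float4 *d_pos; float *d_e;
    cudaMalloc(&d_pos, n * sizeof(float4));
    cudaMalloc(&d_e, n * sizeof(float));
    cudaMemcpy(d_pos, h_pos.data(), n * sizeof(float4), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (n + threads - 1) / threads;
    lj_energy<<<blocks, threads>>>(d_pos, d_e, n, 0.24f, powf(3.4f, 6.0f));

    std::vector<float> h_e(n);
    cudaMemcpy(h_e.data(), d_e, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("atom 0 nonbonded energy: %g\n", h_e[0]);
    cudaFree(d_pos); cudaFree(d_e);
    return 0;
}
```

Every thread runs the same loop on different data, which is exactly the workload shape GPUs are built for; production MD codes replace the all-pairs loop with neighbor lists and cutoffs, but the thread-per-atom decomposition carries over.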

For example, a small simulation of the satellite tobacco mosaic virus (STMV) using NAMD, a molecular dynamics code for biomolecular simulations, can be performed on a modern 16-CPU cluster based on quad-core x86 technology. But according to NVIDIA's Gupta, a 4-GPU workstation running a CUDA version of NAMD will outperform that cluster, and at just a fraction of the power consumption. From the individual researcher's point of view, "anything that keeps the job on the workstation is good," says Gupta.
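
For readers curious how a single application drives all four GPUs in such a workstation, the hedged sketch below shows one common pattern under the current CUDA runtime: the host partitions the atom array, selects each device in turn with cudaSetDevice, and launches an independent kernel on each slice. This illustrates the mechanics only; it is not how NAMD itself is structured (NAMD builds its parallel decomposition on Charm++), and the kernel here is a trivial placeholder.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Placeholder kernel: advance one slice of the atom positions one timestep.
// (A real MD step would compute forces first; this only shows the dispatch.)
__global__ void integrate(float4 *pos, const float4 *vel, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main()
{
    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);           // e.g., 4 in the workstation above
    if (ngpu == 0) return 1;

    const int n = 1 << 20;               // total atoms
    int slice = (n + ngpu - 1) / ngpu;   // atoms per GPU, rounded up
    std::vector<float4*> pos(ngpu), vel(ngpu);

    for (int d = 0; d < ngpu; ++d) {
        cudaSetDevice(d);                // subsequent CUDA calls target GPU d
        int begin = d * slice;
        int count = (begin + slice <= n) ? slice : n - begin;
        cudaMalloc(&pos[d], count * sizeof(float4));
        cudaMalloc(&vel[d], count * sizeof(float4));
        cudaMemset(pos[d], 0, count * sizeof(float4));  // real code would copy
        cudaMemset(vel[d], 0, count * sizeof(float4));  // its slice of atoms in
        integrate<<<(count + 255) / 256, 256>>>(pos[d], vel[d], count, 0.002f);
    }
    for (int d = 0; d < ngpu; ++d) {     // wait for every GPU to finish
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(pos[d]); cudaFree(vel[d]);
    }
    printf("stepped %d atoms across %d GPU(s)\n", n, ngpu);
    return 0;
}
```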

Of course, larger simulations require more computational muscle than a workstation can provide. But since these codes tend to scale very nicely, a GPU cluster is the natural path up. “The key to acceptance here is going to be the fact that it’s easy to simulate large molecules,” explains Gupta. “You don’t have to get time on a supercomputer, because that’s too restricting.” For a drug company, that means every researcher can have a GPU workstation for their own small experiments and can share a GPU cluster when they need to run a larger problem.

Commercial products resulting from GPU-powered computational biology have yet to appear. At this point the use of these methods for drug discovery at pharmaceutical companies is sporadic. And given the length of clinical trials that must follow the drug design and discovery process, Gupta thinks we probably won’t begin to hear of success stories for another five years or so. For NVIDIA, the immediate challenge is to convince the biotech industry that these GPU computational tools and platforms are ready now.
