Extreme Computational Biology at SC13: An Interview with Dr. Klaus Schulten

By Nicole Hemsoth

November 18, 2013

According to Dr. Klaus Schulten from the University of Illinois, the molecular dynamics and visualization programs NAMD and VMD, which serve over 300,000 registered users in many fields of biology and medicine, are pushing the limits of extreme scale computational biology. Schulten says these programs can operate on a wide variety of hardware and offer new inroads to medical discovery.

Dr. Schulten is among several invited speakers at the SC13 event in Denver and will be offering a deep dive presentation on extreme scale computational biology as powered by NAMD and VMD tomorrow (Tuesday) at 10:30 a.m.

In addition to outlining the NAMD and VMD development over the last several years that led to the programs’ extreme performance on Blue Waters, Titan and Stampede, the talk will shed light on how these fields and programs are enabled by petascale computing. Schulten will also place this work in the context of future hardware, describing ongoing efforts toward power-conscious NAMD and VMD computing on the ARM and GPU processors needed to exploit the next generation of computers.

The following is a brief interview that highlights some key features of his talk.

HPCwire: Can you describe the growth of NAMD and VMD and give us a sense of how these developments have helped computational biology evolve?

Klaus Schulten: NAMD and VMD are programs that permit you to simulate very large biomolecules and effectively take on the role of a computational microscope—you simulate these molecules and thereby you visualize them. You know their properties from chemistry and biochemistry; you know their structures from biology. Then, just like you simulate a Boeing before you actually build it, you simulate a molecule in the computer to optimize it.
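
To give a concrete, if highly simplified, sense of what "simulating a molecule" means, the sketch below shows a toy molecular dynamics loop in Python: two atoms joined by a harmonic bond, advanced with a velocity-Verlet integrator. All names and parameters here are hypothetical; production codes such as NAMD add full force fields, periodic boundaries and massive parallel decomposition on top of this basic idea.

```python
import numpy as np

# Toy velocity-Verlet integrator for two atoms joined by a harmonic "bond".
# Every number below is made up for illustration only.
dt   = 1.0e-3                      # time step (arbitrary units)
k    = 100.0                       # hypothetical bond force constant
r0   = 1.0                         # hypothetical equilibrium bond length
mass = np.array([1.0, 1.0])        # atom masses

pos = np.array([[0.0, 0.0, 0.0],
                [1.2, 0.0, 0.0]])  # start with the bond slightly stretched
vel = np.zeros_like(pos)

def forces(pos):
    """Harmonic bond: restoring force -k * (|r| - r0) along the bond axis."""
    d = pos[1] - pos[0]
    r = np.linalg.norm(d)
    f_on_1 = -k * (r - r0) * (d / r)
    return np.array([-f_on_1, f_on_1])

f = forces(pos)
for step in range(1001):
    vel += 0.5 * dt * f / mass[:, None]   # half-kick
    pos += dt * vel                       # drift
    f = forces(pos)
    vel += 0.5 * dt * f / mass[:, None]   # half-kick
    if step % 250 == 0:
        bond = np.linalg.norm(pos[1] - pos[0])
        print(f"step {step:4d}  bond length = {bond:.3f}")
```

The loop is the whole story in miniature: compute forces from a structural model, advance the positions, and repeat billions of times over millions of atoms, which is exactly where petascale machines come in.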

The difference between us and others building similar programs is that we designed the program for parallel computers and for modern software and computer science concepts from the get-go. That meant designing software that went on clusters and then later on parallel computers built around clusters.

That was all until about 2006 when the National Science Foundation decided to invest in a large computer that was a hundred times larger than could be foreseen otherwise, called the petascale computer. We wanted to take advantage of this huge power increase—but not just because we wanted to be 100 times more powerful in what we could simulate, but rather we realized all along that all of our simulations were too small, meaning that a living cell is made of millions and millions of molecules that form associations that cooperate, and we needed to understand how these proteins worked together rather than worked by themselves.

With this big computer we wanted to explore how the molecules of life associate into structures and then cooperate, and this is exactly what we achieved. We solved the structure of the HIV virus, which is made of well over 1,000 proteins that form a capsid, and we can now describe it atom-by-atom. Without petascale computing that would have been impossible.

Achieving this meant using the computer in two ways. On one side, we made the computer part of the experiment, literally. When you want to see a virus in a traditional experiment, you must have the physical virus on hand. But just as Boeing can simulate an airplane, we can simulate many of the molecules found in living cells, bypassing the physical study and making the computer an integral part of the experiment itself.

So we got data from different kinds of experiments, from sources such as crystallography and electron microscopy, and then integrated them into one picture of the virus that gave us a view of the virus at the level of the atom. We could then test it in a second step: we took this model and simulated it in the computer, carrying out what I believe is the largest simulation ever done, even to this point.

At this point we have reached our goal—we could show that the structure is stable when simulated in the computer and we could look at its physical properties—but now of course comes the question of what we learned from it.

First we resolved the structure atom-by-atom because we wanted to make the container of the virus, the so-called “capsid,” a target of drug treatment. That requires that we know the chemistry of the target, because when you deal with drugs, which are molecules, you need to know both sides of the drug treatment in chemical detail: you need to know the drug, of course (very small molecules that are pretty straightforward), and you need to know the target, which in this case is a huge system of over 1,000 proteins, and each protein is itself a big molecule containing several tens of thousands of atoms.

Once we applied drugs to our computational virus, we learned that the drugs most likely work very differently than we thought—we found that the HIV virus is in a way more dangerous and intelligent than we thought.

HIV is like a con artist that smuggles itself into the cell and then persuades the cell to help the virus infect it. Otherwise it’s not at all easy to infect a cell: the virus has to put its own genetic material into the nucleus of the cell, where the living cell keeps its genetic material, and that is difficult because the nucleus is very well protected and very well organized against this kind of intrusion. But the virus talks the cell into helping it get its genes into the nucleus.

And it is this cooperation that is acted upon by antiviral drugs.

And so now we have the stage of this drama: on the enormous surface of the virus, which is made up of over a thousand proteins, the virus recruits proteins from the infected cell to help it in its vicious strategy to get the virus’ genes inside. That is apparently where antiviral drugs interfere: in this coordination with the cell.

HPCwire: I know at the end of your talk you plan on closing with the direction for using ARM and GPU cores to further this. Can you speak to that angle?

Klaus Schulten: We are one of the technology centers funded by the National Institutes of Health. We’re called the Center for Macromolecular Modeling and Bioinformatics and we’ve received funding for 23 years now and we will receive funding for five more years. The task of the center is to make the absolute best computing technology available to biomedical researchers in the United States.

And our task, since we have shown through our research that we can use modern computing technology (particularly parallel computing) extremely well, is not only to use it for our own research but also to make it available to others. Our goal is to be as good as or better than the physicists in using computing technology for the benefit of our particular scientific community, which in our case is biomedical research.

I think we are doing very, very well because our software runs extremely effectively on the biggest computers in the world. But it is the same software from the laptop to these big computers, so the individual researcher can learn it on his or her laptop and use it all the way up to the big machine. In the same way, what we develop for the big machine trickles down quickly to the small machine.

Our task is now to utilize this technology constantly. So from 2006 to 2012 we worked on making petascale computing possible. We focused on making these programs capable of simulating very, very large systems, hundreds of times larger than before, and also on analyzing and visualizing the results. This meant working on two fronts: on the front of the actual simulation, which is done by the program NAMD, and on the front of the visualization, which is done by VMD.

Now that work is behind us. Among the new technologies on the horizon, we think the upcoming technology for the next generations of computers will be ARM chips, which we have been very successful in integrating.

But one factor that has never before been so important is the use of power. Now we not only adopt each new generation of chips for our software perhaps two or three years before any scientists outside our own group first use it, but we also have to power-profile all of our algorithms and all of our computational strategies.

Before, the only thing that counted was how fast we compute. Now the talk is of scalability and of making bigger models that effectively make use of bigger machines, and the talk in the lab is constantly about power profiling. Where can we cut corners in power consumption? What new computational strategies should we adopt? So the issue of power consumption has entered our development work.
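
To make "power profiling" concrete, here is a minimal, hedged sketch of the bookkeeping involved: the quantity that matters is energy to solution, average power multiplied by runtime, so a slower computational strategy can still win if it draws much less power. The runtimes and power draws below are invented purely for illustration; in real profiling they would come from hardware counters such as RAPL on CPUs or a GPU's on-board power sensors.

```python
# Compare two hypothetical computational strategies by energy-to-solution
# rather than time-to-solution alone. All figures are invented examples.
strategies = {
    "fast_kernel":      {"runtime_s": 120.0, "avg_power_w": 250.0},
    "throttled_kernel": {"runtime_s": 150.0, "avg_power_w": 160.0},
}

for name, s in strategies.items():
    energy_kj = s["runtime_s"] * s["avg_power_w"] / 1000.0   # E = P_avg * t
    print(f"{name:18s}  time = {s['runtime_s']:6.1f} s   energy = {energy_kj:6.1f} kJ")
```

In this made-up comparison the "throttled" strategy takes 25 percent longer yet uses about 20 percent less energy, which is precisely the kind of trade-off such profiling weighs alongside raw speed.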

HPCwire: So there really aren’t a lot of ARM-based supercomputers out there yet. Where and how are you testing these ideas?

Klaus Schulten: First, there are the smartphones and tablets. Our priority is to support the software that puts demands on the machine, so that through a smartphone or tablet you have another input device, an extra monitor or an extra output device. But we’re already well on our way to having the entire programs run on tablets and smartphones.

That went pretty well, but the main problem is that you have a very, very small monitor, so you must develop a new user interface, and that takes time and creates a bottleneck for when we can release our software on these devices.

The main point, however, is that people expect that computers will be built from these kinds of chips, and that these chips will then be made available in a form you can put into other systems and use for computing. For that moment we will also be ready.

We’ve learned that these are very intelligent chips that can handle power issues in a much more flexible way, which enabled us to add a dimension to our computational strategy that we never had before: a totally new culture to prepare us for the next generation of computing.

HPCwire: Let’s talk briefly about what GPUs have lent to computational biology in general.

Klaus Schulten: They were a tremendous benefit because they go in two directions. The first is that they make very powerful computing possible in the lab for much less money. It’s very cost-effective computing, and very powerful computing. So the kind of calculations that until just two or three years ago required a $50,000 computer can now be done with a few-thousand-dollar GPU cluster or even a single GPU board.

In the other direction, many smaller calculations are being made possible through GPUs. We were very early in demonstrating this with our first GPU-extended paper, but today many labs are working with it very well.

So that is the poor man’s powerful computer, which has been essential in proliferating the methodology and the culture of computing within the biomedical community.

Finally we come to accelerators, which is where Cray has played such a large role, particularly in boosting the speed of Titan and Blue Waters by more than a factor of two. And we can often do better still—we are still battling to get more performance out of the GPUs. The effect is that what we expected at first to gain from these computers has been doubled or even tripled.

And that of course is when the power of the computer delineates the scientific frontier. When all of a sudden you can go twice as far, reaching twice as fast into new territory, that’s a huge scientific advancement. That’s what GPUs made possible.
