September 29, 2006
Since the first AIDS cases were reported in 1981, 25 million people have died of the disease, and the human immunodeficiency virus (HIV) that causes it -- first isolated in 1983 -- is estimated to have infected more than 65 million adults and children worldwide. A quarter of a century on from those first cases, we now have effective treatments. But they're expensive and, if doses are missed, the effects can be catastrophic. HIV, in short, continues to be one of the major threats to global health.
High performance computing (HPC) is playing a key role in the search for an HIV vaccine. Even within a single infected patient, the rapidly evolving virus creates an enormous amount of data for potential analysis. This kind of statistical analysis requires randomization testing on an enormous scale, which in turn needs a computationally intensive approach to shorten the time between potentially ground-breaking medical observations and the arrival of effective treatments.
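The article doesn't specify the statistics involved, but a minimal sketch of a randomization (permutation) test -- written here in Python for brevity, not in the team's own code -- illustrates the shape of the computation: score an observed association, then rescore it thousands of times against shuffled labels to see how often chance alone does as well. The `association_score` statistic and the inputs below are hypothetical placeholders.

```python
import random

def association_score(mutations, immune_types):
    """Toy statistic: count mutations observed in carriers of one immune type.

    Both arguments are parallel lists of 0/1 flags; a real analysis would use
    a far more sophisticated statistic -- this is only a placeholder.
    """
    return sum(m for m, t in zip(mutations, immune_types) if t == 1)

def randomization_test(mutations, immune_types, n_permutations=10_000, seed=0):
    """Estimate a p-value by repeatedly shuffling the immune-type labels."""
    rng = random.Random(seed)
    observed = association_score(mutations, immune_types)
    shuffled = list(immune_types)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(shuffled)
        if association_score(mutations, shuffled) >= observed:
            hits += 1
    # The +1 terms keep the estimate away from an impossible p-value of zero.
    return (hits + 1) / (n_permutations + 1)
```

Each permutation is independent of the others, which is why tests like this parallelize so naturally: the permutations can be split across cluster nodes and the hit counts summed at the end.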
For the past three years, David Heckerman, senior researcher, and Carl Kadie, research software development engineer, both at Microsoft Research, have been applying their backgrounds in machine learning to the challenge of using high performance computing to pursue the ultimate AIDS vaccine design.
HPCwire: What are the primary goals of your work on HIV vaccine design?
Heckerman: Vaccine design has two key aims. We're working on the payload or what is called the "immunogen" of the vaccine -- the substance that provokes an immune response. The other important goal of vaccine design is the means of delivering the vaccine, or what is called the "vector."
The tricky part in designing the immunogen is the fact that HIV mutates so rapidly. So pernicious is the virus that it contains a built-in protein that deliberately makes mistakes when the virus is copied. By mutating rapidly, the virus is able to constantly escape attacks by our immune system.
Our plan is to identify the potential escape routes of the virus and to prime our immune systems to target the virus, regardless of which exit route it chooses.
HPCwire: Why is HPC particularly important when it comes to finding a cure for HIV?
Kadie: When HIV infects a new person, it actually changes to throw off the mutations it developed in the previous person. The way it changes depends on the type of immune system you have. There are hundreds of different types of immune systems, leading to many different types of HIV. Certain patterns of mutation repeat across the human population around the world, but they are not obvious to the human eye -- there are thousands of sequences from patients to look at. Much of our work involves building simulations of how the virus mutates in response to attacks by the immune system. Our most accurate simulations are extremely computationally intensive, requiring HPC.
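Kadie doesn't spell out the statistical machinery here. As one hedged illustration of how a mutation pattern might be tied to a type of immune system, the sketch below (in Python, with made-up data) cross-tabulates the presence of a viral mutation against carriage of a hypothetical immune marker -- say, a particular HLA allele -- across patients and applies Fisher's exact test.

```python
from scipy.stats import fisher_exact

# Hypothetical per-patient observations: does the patient carry the immune
# marker (e.g., a particular HLA allele), and does their virus show the mutation?
patients = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, False), (False, True), (False, False),
]

# Build the 2x2 contingency table: rows = marker present/absent,
# columns = mutation present/absent.
table = [[0, 0], [0, 0]]
for has_marker, has_mutation in patients:
    table[0 if has_marker else 1][0 if has_mutation else 1] += 1

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

In practice a test of this kind would be repeated for every position in the viral sequence and every immune-system type, with randomization and multiple-testing corrections layered on top -- which is where the computational burden comes from.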
HPCwire: During his Supercomputing 2005 keynote, Bill Gates addressed the transformation resulting from the availability of massive amounts of real-world data from low-cost sensors. What opportunities and challenges does this scenario present in your work?
Heckerman: In our case, the real world sensors are the countless health workers scattered throughout the world -- especially in third-world areas where HIV is endemic -- who are monitoring the health of the local inhabitants, taking blood samples for experimentation purposes, and offering treatment whenever possible.
HPCwire: What kind of HPC cluster do you have in place to do your work?
Kadie: The simulations require massive amounts of computation -- sometimes as much as a CPU year of computation for a single run. Using Windows Compute Cluster Server 2003, we have racked up dozens of CPU years of results that are helping us further our understanding of the way HIV interacts with our immune system.
The HPC cluster is based on 25 IBM eServer 326 boxes, with two AMD Opteron processors per machine running at 2.6GHz. Of the eight new research programs we've got under way, six use .NET (C# and C++/CLI), one is in "R" and one in native C++.
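As a back-of-the-envelope check (and assuming the ideal case of perfectly parallel scaling, which real runs never achieve), a cluster of that size could turn around a one-CPU-year simulation in roughly a week:

```python
# Rough wall-clock estimate for a one-CPU-year run on the cluster described
# above: 25 nodes x 2 Opteron processors = 50 CPUs.
cpus = 25 * 2
cpu_days = 365                       # one CPU-year of work
wall_clock_days = cpu_days / cpus    # assumes perfect, embarrassingly parallel scaling
print(f"about {wall_clock_days:.1f} days")   # ~7.3 days
```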
HPCwire: In order to achieve the next big breakthrough, what advanced computing capabilities are on your wish list?
Kadie: We need to be able to grab more cycles from more computers in as painless a way as possible. One of the key roadblocks here is trust. When moving beyond clusters to grids, we can no longer trust the machines on which we run our simulations, and the owners of those machines can't trust our software. A solution that makes computation seamless in this environment is critical.
HPCwire: How do you think your HIV vaccine design work will have progressed in five years' time?
Heckerman: We're currently moving from using HPC to simulate reactions to HIV exposure in test tubes to animal studies, particularly in mice. Five years down the line, we hope to be in the human phase, inoculating humans with an HIV vaccine and simulating the -- hopefully effective -- response of their immune systems.
Our statistical analyses could also prove beneficial in looking at other viral infections where you see a lot of mutation. Hepatitis C is an example where we might also be able to come up with an effective vaccine with the help of our HPC-based approach.
HPCwire: Taking a mile-high view, what kind of impact do you think your current work with HPC and computational statistics will have on the medical profession in general?
Heckerman: Experts are pointing to "personalized medicine" as the next breakthrough in medical care. In the current medical paradigm, the patient becomes aware of certain symptoms and goes to the doctor. He or she is then prescribed treatments taught in medical schools -- treatments almost invariably based on effectiveness in the general population rather than effectiveness in the individual patient. While such treatments often work, sometimes they don't, and they can have side effects that cause as many problems as they solve.
Personalized medicine, in contrast, involves assessing an individual's genetic predisposition to particular diseases and responses to particular treatments. With such information, diseases could be treated before they exact enormous personal and social costs, and treatments that cause harmful side effects in a particular individual could be avoided. The HPC methods that we have been applying to the study of HIV can be used to tease out the relationships between genes, disease, and treatment, and thus provide a basis for personalized medicine.
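Heckerman doesn't describe the models here; as one hedged sketch of what teasing out such relationships can look like statistically, the example below (Python, with simulated data and made-up variable names) fits a logistic regression in which a genetic marker, a treatment flag, and their interaction predict treatment response. A significant interaction term is the kind of signal that would suggest a treatment works differently for carriers of the marker.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical cohort: a binary genetic marker and a binary treatment flag.
gene = rng.integers(0, 2, size=n)
treated = rng.integers(0, 2, size=n)

# Simulated outcome in which the treatment only helps carriers of the marker.
logit = -1.0 + 1.5 * gene * treated
response = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Logistic regression with a gene-by-treatment interaction term.
X = sm.add_constant(np.column_stack([gene, treated, gene * treated]))
result = sm.Logit(response, X).fit(disp=0)
print(result.summary(xname=["const", "gene", "treated", "gene_x_treated"]))
```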