December 04, 2012
BARCELONA, Dec. 4 - Bioinformaticians at IMIM (Hospital del Mar Medical Research Institute) and UPF (Pompeu Fabra University) have used molecular simulation techniques to explain a specific step in the maturation of HIV virions, i.e., how newly formed, inert virus particles become infectious, a step that is essential to understanding how the virus replicates. These results, published in the latest edition of PNAS, could be crucial to the design of future antiretrovirals.
HIV virions mature and become infectious through the action of a protein called HIV protease. This protein acts like a pair of scissors, cutting the long chain of connected proteins that make up HIV into the individual proteins that will form the infectious structure of new virions. According to researchers in the IMIM-UPF computational biophysics group, "One of the most intriguing aspects of the whole HIV maturation process is how free HIV protease, i.e. the 'scissors protein,' appears for the first time, since it is also initially part of the long poly-protein chains that make up new HIV virions."
Using ACEMD, a molecular simulation software package, and a technology known as GPUGRID.net, Gianni De Fabritiis' group has demonstrated that the first "scissors proteins" can cut themselves out from the middle of these poly-protein chains. They do this by binding one of their connected ends (the N-terminus) to their own active site and then cutting the chemical bond that connects them to the rest of the chain. This is the initial step of the whole HIV maturation process. If the HIV protease can be stopped at this stage, viral particles, or virions, will be prevented from reaching maturity and, therefore, from becoming infectious.
This work was performed using GPUGRID.net, a volunteer distributed computing platform that harnesses the processing power of thousands of NVIDIA GPU accelerators in household computers made available by the public for research purposes; in effect, it provides access to a virtual supercomputer. One of the benefits of GPU acceleration is that it delivers computing power roughly 10 times higher than that of CPU-only machines. It reduces research costs accordingly, providing a level of computational power that was previously available only on dedicated, multi-million-dollar supercomputers.
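To put that roughly 10x figure in concrete terms, the short sketch below compares how long a single long trajectory segment would take on a CPU-only host versus a GPU-accelerated one. The absolute throughput numbers are hypothetical placeholders chosen for illustration; only the ~10x ratio comes from the article.

```python
# Minimal illustration of the ~10x GPU-over-CPU throughput figure.
# The absolute rates below are hypothetical placeholders, not measured values.

TARGET_NS = 500.0                         # one trajectory segment (hundreds of ns)
CPU_NS_PER_DAY = 5.0                      # assumed CPU-only MD throughput (ns/day)
GPU_NS_PER_DAY = CPU_NS_PER_DAY * 10.0    # ~10x figure cited in the article

cpu_days = TARGET_NS / CPU_NS_PER_DAY     # wall-clock days on a CPU-only host
gpu_days = TARGET_NS / GPU_NS_PER_DAY     # wall-clock days on a GPU-accelerated host

print(f"CPU-only host: {cpu_days:.0f} days for {TARGET_NS:.0f} ns")
print(f"GPU host:      {gpu_days:.0f} days for {TARGET_NS:.0f} ns")
```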
Researchers use this computing power to process large amounts of data and generate highly complex molecular simulations. In this specific case, thousands of computer simulations were carried out, each covering hundreds of nanoseconds (billionths of a second), for a total of almost a millisecond of simulated time.
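That total can be checked with simple arithmetic: thousands of independent trajectories of a few hundred nanoseconds each sum to roughly a millisecond of sampled time. In the sketch below, the specific counts (5,000 runs of 200 ns) are illustrative assumptions chosen only to match the orders of magnitude reported in the article.

```python
# Aggregate sampling arithmetic: many short trajectories add up to ~1 ms.
# The exact counts are illustrative assumptions; the article gives only
# orders of magnitude ("thousands" of runs, "hundreds" of nanoseconds each).

n_simulations = 5_000            # assumed number of independent trajectories
ns_per_simulation = 200.0        # assumed length of each trajectory (ns)

total_ns = n_simulations * ns_per_simulation   # total simulated time in ns
total_ms = total_ns / 1_000_000                # 1 ms = 1,000,000 ns

print(f"Total sampled time: {total_ns:,.0f} ns = {total_ms:.1f} ms")
# -> Total sampled time: 1,000,000 ns = 1.0 ms ("almost a millisecond")
```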
According to the researchers, this discovery about the HIV maturation process suggests an alternative approach to the design of future pharmaceutical products, one based on targeting these newly described molecular mechanisms. For now, the work provides a greater understanding of a crucial step in the life cycle of HIV, a virus that directly attacks and weakens the human immune system, leaving it vulnerable to a wide range of infections, and which affects millions of people around the world.