October 21, 2005
Researchers at Los Alamos National Laboratory have set a world record by performing the first million-atom computer simulation in biology. Using the "Q Machine" supercomputer, Los Alamos scientists have created a molecular simulation of the cell's protein-making structure, the ribosome.
The project, simulating 2.64 million atoms in motion, is thought to be more than six times larger than any biological simulation performed to date.
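The article does not describe the simulation code itself, but classical molecular dynamics of this kind typically advances every atom in tiny time steps with an integrator such as velocity Verlet. The Python/NumPy sketch below is a toy illustration of that idea only; the harmonic force_fn placeholder is an assumption for demonstration, standing in for the full biomolecular force field a real ribosome run would require.

    import numpy as np

    def velocity_verlet_step(pos, vel, forces, masses, dt, force_fn):
        # Advance one molecular dynamics time step. pos, vel, forces are
        # (N, 3) arrays for N atoms; masses is (N, 1); force_fn maps
        # positions to forces.
        acc = forces / masses
        pos = pos + vel * dt + 0.5 * acc * dt ** 2        # move atoms
        forces = force_fn(pos)                            # forces at new positions
        vel = vel + 0.5 * (acc + forces / masses) * dt    # update velocities
        return pos, vel, forces

    # Toy system: atoms tethered to the origin by harmonic springs.
    N, dt = 1000, 0.002
    rng = np.random.default_rng(0)
    pos, vel = rng.normal(size=(N, 3)), np.zeros((N, 3))
    masses = np.ones((N, 1))
    force_fn = lambda p: -p            # placeholder, NOT a real force field
    forces = force_fn(pos)
    for _ in range(1000):              # production runs take millions of steps
        pos, vel, forces = velocity_verlet_step(pos, vel, forces, masses, dt, force_fn)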
The ribosome is the ancient molecular factory responsible for synthesizing proteins in all organisms. Using the new tool, the Los Alamos team led by Kevin Sanbonmatsu is the first to observe the entire ribosome in motion at atomic detail. This first simulation of the ribosome offers a new method for identifying potential antibiotic targets for such diseases as anthrax. Until now, only static, snapshot structures of the ribosome have been available.
Sanbonmatsu posits that this technique offers a powerful new tool for understanding molecular machines and improving the efficacy of antibiotics. Antibiotic drugs are less than one one-thousandth the size of the ribosome and act like a monkey wrench in the machinery of the cell. Such drugs diffuse into the most critical sites of this molecular machine and grind the inner workings of the ribosome to a halt.
"Designing drugs based on only static structures of the ribosome might be akin to intercepting a missile knowing only the launch location and the target location with no radar information," Sanbonmatsu said. "Our simulations enable us to map out the path of the missile's trajectory. The methods and implications lie at the interface between biochemistry, computer science, molecular biology, physics, structural biology and materials science. I believe the results serve as a proof-of-principle for materials scientists, chemists and physicists performing similar simulations of artificial molecular machines in the emerging field of nano-scale information processing.
Sanbonmatsu's study focuses on decoding, the essential phase during protein synthesis within the cell wherein information transfers from RNA to protein, completing the information flow specified by Francis Crick in 1958 and known as the Central Dogma of Molecular Biology. "The ribosome is, in fact, a nano-scale computer and is very much analogous to the 'CPU' of the cell," he said.
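As a concrete, highly simplified illustration of what decoding means at the information level: each three-letter mRNA codon is matched to an amino acid, roughly like a table lookup. The toy Python sketch below (with most of the 64-codon genetic code elided) shows only that mapping; it says nothing about the physical selection machinery the Los Alamos simulation actually studies.

    # Toy illustration of decoding: mRNA codons -> amino acids.
    CODON_TABLE = {
        "AUG": "Met", "UUU": "Phe", "UUC": "Phe", "GGC": "Gly",
        "GCU": "Ala", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
        # ...the remaining 56 codons are omitted here for brevity
    }

    def translate(mrna):
        # Read the message three bases at a time until a stop codon.
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "?")
            if residue == "STOP":
                break
            protein.append(residue)
        return "-".join(protein)

    print(translate("AUGUUUGGCGCUUAA"))    # prints: Met-Phe-Gly-Ala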
The ribosome is so fundamental to life that many portions of this molecular machine are identical in every organism ever genetically sequenced. In developing the project, the team identified a corridor inside the ribosome that the transfer RNA must pass through for the decoding to occur, and it appears to be constructed almost entirely of universal bases, implying that it is evolutionarily ancient.
The corridor represents a new region of the ribosome containing a variety of potential new antibiotic targets. The simulations also reveal that the essential translating molecule, transfer RNA, must be flexible in two places for decoding to occur, furthering the growing belief that transfer RNA is a major player in the machine-like movement of the ribosome. The simulation also sets the stage for future biochemical research into decoding by identifying 20 universally conserved ribosomal bases important for accommodation, as well as a new structural gate, which may act as a control mechanism during transfer RNA selection.
The multi-million-atom simulation was run on 768 of the "Q" machine's 8,192 available processors. Sanbonmatsu worked to develop the simulation with Chang-Shung Tung of Los Alamos, as well as Simpson Joseph of the University of California at San Diego. Funding for the research was provided by the National Institutes of Health, Los Alamos National Laboratory's research and development fund, and the Laboratory's Institutional Computing Project.
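The piece does not say how the computation was divided across those 768 processors. As a generic illustration only, the sketch below splits a toy O(N^2) force calculation over worker processes with Python's multiprocessing; real MD codes use MPI, spatial decomposition, and far more sophisticated force fields than this unit-charge repulsion.

    import numpy as np
    from multiprocessing import Pool

    def forces_on_chunk(args):
        # Forces on one block of atoms from all N atoms (toy O(N^2) kernel).
        idx, pos = args
        diff = pos[idx, None, :] - pos[None, :, :]       # (chunk, N, 3) displacements
        r2 = (diff ** 2).sum(axis=-1)
        r2[np.arange(len(idx)), idx] = np.inf            # zero out self-interaction
        return (diff * r2[..., None] ** -1.5).sum(axis=1)

    if __name__ == "__main__":
        N, workers = 4000, 8
        pos = np.random.default_rng(1).normal(size=(N, 3))
        chunks = [(idx, pos) for idx in np.array_split(np.arange(N), workers)]
        with Pool(workers) as pool:
            forces = np.concatenate(pool.map(forces_on_chunk, chunks))
        print(forces.shape)    # (4000, 3): one force vector per atom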