February 05, 2009
"A Decade of Discovery" hails supercomputers used in Parkinson's disease breakthrough
Feb. 4 -- A new book by the U.S. Department of Energy (DOE) commemorating the agency's most significant scientific breakthroughs of the last decade includes groundbreaking research by scientists at the San Diego Supercomputer Center into the molecular mechanisms that cause Parkinson's disease.
Called A Decade of Discovery, the new publication covers a wide array of transformational science and engineering research, divided into three broad categories: energy and the environment, national security, and life and physical science. The hardcover book highlights research done by the DOE's 17 national laboratories, such as the development of new, cleaner, and sustainable fuels; anti-terrorism technologies to protect troops and citizens; measures to maintain a safe and reliable nuclear weapons stockpile; and better ways to detect and treat major diseases such as cancer and Parkinson's.
In the area of life and physical science, the DOE book highlights the work of a team led by Igor Tsigelny, a scientist at the San Diego Supercomputer Center (SDSC) and the Department of Chemistry and Biochemistry at the University of California, San Diego, working in collaboration with DOE's Argonne National Laboratory.
Using supercomputer resources both at SDSC and Argonne, Tsigelny and his colleagues were able to elucidate for the first time the concrete molecular mechanism behind Parkinson's disease, providing new insights into the illness and a promising avenue of treatment. The findings have also provided the tools for other researchers to aid in the study of other disorders associated with abnormally aggregated proteins, including Alzheimer's and prion diseases.
While the tremors, rigid posture, and shuffling gait of Parkinson's disease have been associated for decades with the die-off of dopamine-producing neurons in the brain, scientists did not know until recently how these neurons, riddled with suspicious protein clumps, are affected by the disease.
"We couldn't have done this without the supercomputers," Tsigelny is quoted as saying in the DOE book. "They gave us the power to track enough molecules over time to see the interactions we were looking for."
Specifically, the protein clumps in Parkinson's disease consist primarily of a protein called alpha synuclein (aS). For many years a prime target in Parkinson's research, aS has resisted conventional protein analysis because it has an ever-changing shape.
Tsigelny decided to study aS in motion using a computer modeling approach known as molecular dynamics. His new tool, called MAPAS (Membrane-Associated Protein Assessments), harnesses the power of supercomputers to study how this protein contacts cell membranes.
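For readers curious what a molecular dynamics run involves in practice, below is a minimal, generic sketch in Python using the open-source OpenMM toolkit. It is purely illustrative and is not the team's MAPAS code; the input file name, force field choice, and run length are placeholder assumptions.

```python
# Minimal molecular dynamics sketch with OpenMM (illustrative only; not MAPAS).
# Assumes a prepared, solvated structure in "alpha_synuclein.pdb" (placeholder).
from sys import stdout
from openmm.app import (PDBFile, ForceField, Simulation,
                        PDBReporter, StateDataReporter, PME, HBonds)
from openmm import LangevinMiddleIntegrator
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

pdb = PDBFile('alpha_synuclein.pdb')
forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')

# Build the physical system: particle-mesh Ewald electrostatics, a 1 nm
# cutoff, and constrained hydrogen bonds to allow a larger time step.
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1*nanometer,
                                 constraints=HBonds)

# Langevin dynamics at 300 K with a 4 fs time step.
integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond,
                                      0.004*picoseconds)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()  # relax bad contacts before running dynamics

# Record a snapshot and basic statistics every 1,000 steps.
simulation.reporters.append(PDBReporter('trajectory.pdb', 1000))
simulation.reporters.append(StateDataReporter(stdout, 1000, step=True,
                                              potentialEnergy=True,
                                              temperature=True))
simulation.step(10000)  # a short demo run; production runs are far longer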
Tsigelny's virtual view of pore formation enabled him to identify the protein-binding sites on aS molecules in a high level of detail. Further studies revealed that beta synuclein, a brain protein very similar to aS, appeared to inhibit aS molecules from linking together. Working with Eliezer Masliah, a professor of Neurosciences and Pathology at UC San Diego's School of Medicine, Tsigelny and his team, which included Mark Miller and Yuriy Sharikov, used these findings to develop a compound capable of blocking interactions between aS molecules, halting their aggregation and subsequent pore formation.
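As a rough illustration of how binding interfaces can be read out of simulation output, the following hypothetical sketch uses the open-source MDAnalysis library to count how often residues of one protein copy come within contact distance of another across a trajectory. The file names, segment IDs, and 4.5-angstrom cutoff are assumptions for the example, not details of the group's actual analysis.

```python
# Hypothetical contact-frequency analysis with MDAnalysis (illustrative only).
# Counts how often each residue of chain A comes within 4.5 A of chain B.
import MDAnalysis as mda
from MDAnalysis.analysis import distances

u = mda.Universe('system.pdb', 'trajectory.dcd')  # placeholder files
chain_a = u.select_atoms('segid A')
chain_b = u.select_atoms('segid B')

counts = {}  # residue id -> number of frames in contact
for ts in u.trajectory:
    # Pairwise distance matrix between the two chains for this frame.
    d = distances.distance_array(chain_a.positions, chain_b.positions)
    in_contact = (d < 4.5).any(axis=1)  # atoms of A near any atom of B
    for resid in set(chain_a[in_contact].resids):
        counts[resid] = counts.get(resid, 0) + 1

# Residues that are in contact most often are candidate binding-site residues.
for resid, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
    print(f'residue {resid}: in contact in {n} of {len(u.trajectory)} frames')
```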
Tsigelny used his computer models to design and test candidate compounds that could prevent aggregation, and subsequent laboratory tests by Masliah have been very promising. "If the studies proceed in their current direction, it is quite possible they will lead to the first drug to treat the cause of Parkinson's disease instead of the symptoms," according to Tsigelny. Such a treatment would offer hope to the more than 1 million people living with Parkinson's disease today.
As an organized research unit of UC San Diego, the San Diego Supercomputer Center is a national leader in creating and providing cyberinfrastructure for data-intensive research. Cyberinfrastructure refers to an accessible and integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC is a founding member of the national TeraGrid, the nation's largest open scientific discovery infrastructure.
Source: San Diego Supercomputer Center