April 27, 2011
New NNSA video highlights solutions to non-nuclear challenges
WASHINGTON, April 27 -- All this week, the National Nuclear Security Administration (NNSA) is highlighting its advanced supercomputing capabilities with a series of features demonstrating the science and technology work done by the Advanced Simulation and Computing (ASC) program. As part of that effort, NNSA launched a new and improved ASC webpage.
Today's feature is a new video on NNSA's website that describes the role NNSA plays in non-nuclear research. While preparing NNSA's supercomputing platforms for use in the stockpile stewardship program, NNSA's laboratories often enable groundbreaking research and analysis into a wide variety of non-nuclear issues. As a result, NNSA's supercomputers have given the nation the tools to tackle a wide range of national challenges.
For example, NNSA and Lawrence Livermore National Laboratory today announced that a team of computational physics and engineering experts has been using NNSA supercomputers to better understand the impact of space debris. Working in collaboration with Los Alamos National Laboratory and Sandia National Laboratories, the team developed a set of tools known as the Testbed for Space Situational Awareness (TESSA), which can simulate the positions of objects in orbit and their detection by telescope and radar systems, helping to prevent a space disaster. In the future, the same technology could be used to enhance nuclear security by helping plan sensor operations and assessing the benefits of specific sensor systems, technologies, and data analysis techniques.
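To give a sense of what "simulating the positions of objects in orbit and their detection" involves, the following is a minimal sketch, not the TESSA code: it propagates a single debris object on a simple circular, two-body orbit and applies an elevation-angle cutoff to decide whether a ground sensor can see it. The orbit, sensor location, and cutoff are illustrative assumptions.

```python
# Minimal sketch (not TESSA): propagate a circular two-body orbit and
# check visibility from a ground sensor using an elevation-angle cutoff.
import math

MU_EARTH = 398600.4418   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6378.137       # km, Earth's equatorial radius

def orbital_position(altitude_km, t_s, phase_rad=0.0):
    """Position (x, y) of an object on a circular equatorial orbit at time t."""
    r = R_EARTH + altitude_km
    n = math.sqrt(MU_EARTH / r**3)        # mean motion, rad/s
    theta = phase_rad + n * t_s
    return r * math.cos(theta), r * math.sin(theta)

def visible(sat_xy, station_lon_rad, min_elevation_deg=10.0):
    """True if the object sits above the sensor's minimum elevation angle."""
    sx, sy = R_EARTH * math.cos(station_lon_rad), R_EARTH * math.sin(station_lon_rad)
    dx, dy = sat_xy[0] - sx, sat_xy[1] - sy
    rng = math.hypot(dx, dy)
    # sin(elevation) = (local up) . (line of sight)
    up_dot_los = (sx * dx + sy * dy) / (R_EARTH * rng)
    elevation = math.degrees(math.asin(max(-1.0, min(1.0, up_dot_los))))
    return elevation >= min_elevation_deg

# Sweep roughly one orbit of a debris object at 800 km altitude,
# sampling once per minute, and count how often the sensor can see it.
minutes_visible = sum(visible(orbital_position(800.0, t), station_lon_rad=0.0)
                      for t in range(0, 6100, 60))
print(f"minutes visible from the station during one orbit: {minutes_visible}")
```

A production tool tracks many thousands of objects in three dimensions with perturbed orbits and realistic sensor models, which is why this kind of simulation is run on supercomputers rather than a laptop.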
"NNSA's efforts to maintain the safety, security and effectiveness of the nuclear stockpile without underground testing have yielded solutions to some of the most challenging issues that face our country," said Don Cook, NNSA's Deputy Administrator for Defense Programs. "From space debris to medical work to climate change, even to understanding the damage that caused the breakup of the Space Shuttle Columbia, NNSA has been able to support many important issues that impact the nation while implementing President Obama's nuclear security agenda."
The development of the space debris modeling capability is one of many examples of the ways NNSA's supercomputers have enabled our laboratories to find solutions to broader national challenges.
At Los Alamos National Laboratory, ASC code is being used for medical physics. MCNPX, a general-purpose Monte Carlo radiation transport code for modeling the interaction of radiation with matter, is well suited to medical applications because of the accuracy of its physics models, its set of clinically relevant features, and the responsive support provided by its developers and user community. LANL has used the MCNP code, a three-dimensional, parallel, internationally respected particle transport code widely applied in medical physics and radiation health protection, to calculate dose distributions for brain tumor therapy at the MIT Nuclear Reactor.
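The core idea behind Monte Carlo transport codes such as MCNP and MCNPX is to follow many individual particles through matter, sampling each interaction from probability distributions. The toy sketch below is not MCNP physics: it sends photons into a one-dimensional slab, samples the depth of first interaction from the exponential attenuation law, and tallies deposited energy per depth bin as a crude dose profile. The attenuation coefficient and geometry are illustrative only.

```python
# Toy Monte Carlo transport sketch (not MCNP): photons enter a 1D slab,
# the first-interaction depth is sampled from the exponential attenuation
# law, and deposited energy is tallied per depth bin as a crude dose profile.
import math
import random

MU = 0.07          # 1/cm, assumed linear attenuation coefficient (illustrative)
SLAB_CM = 20.0     # slab thickness
N_BINS = 20
N_PHOTONS = 100_000

dose = [0.0] * N_BINS
for _ in range(N_PHOTONS):
    # Sample depth of first interaction: x = -ln(u) / mu.
    depth = -math.log(random.random()) / MU
    if depth < SLAB_CM:
        # Deposit the photon's energy locally (crude approximation).
        dose[int(depth / SLAB_CM * N_BINS)] += 1.0

for i, d in enumerate(dose):
    print(f"{i * SLAB_CM / N_BINS:5.1f} cm : {d / N_PHOTONS:.4f}")
```

Real clinical calculations track secondary particles, use measured cross-section libraries, and model patient anatomy in three dimensions, which is what makes the full codes both accurate and computationally demanding.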
Scientists also helped stabilize Roadrunner, an NNSA supercomputer at LANL, by running science-based applications on it before it entered service in the stockpile stewardship program. One application modeled HIV proteins, leading to a better understanding of how the AIDS virus replicates itself. That project could serve as a cornerstone for developing the first viable vaccine to protect people from HIV.
Also at LANL, researchers have studied how a potential influenza pandemic could sweep across a continent. This research, supported by the Department of Homeland Security, with LANL supercomputer time provided by the Institutional Computing and ASC programs, led to a cover article in the Proceedings of the National Academy of Sciences in April 2006 titled "Mitigation strategies for pandemic influenza in the United States."
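The LANL work used detailed, large-scale epidemic simulations; the compact sketch below only illustrates the underlying question of how a mitigation strategy changes an outbreak. It integrates the classic susceptible-infected-recovered (SIR) equations and compares the epidemic peak with and without a reduction in the transmission rate. All parameter values are illustrative assumptions, not results from the LANL model.

```python
# Minimal SIR sketch (not LANL's pandemic model): integrate the classic
# susceptible-infected-recovered equations and compare epidemic peaks
# with and without a mitigation that reduces the transmission rate.
def sir_peak(beta, gamma=0.25, population=1.0, i0=1e-4, days=300, dt=0.1):
    s, i, r = population - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / population * dt   # new infections this step
        new_rec = gamma * i * dt                   # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

baseline = sir_peak(beta=0.5)          # unmitigated transmission rate
mitigated = sir_peak(beta=0.5 * 0.6)   # 40% reduction in transmission
print(f"peak infected fraction: {baseline:.3f} vs {mitigated:.3f} with mitigation")
```

National-scale models replace these three compartments with hundreds of millions of simulated individuals moving between households, schools, and workplaces, which is why they require supercomputer time.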
Researchers at Sandia National Laboratories played a key role in helping NASA determine the cause of the space shuttle Columbia disaster. Sandia analyses and experimental studies supported the position that foam debris shed from the fuel tank and striking the orbiter wing during launch was the most probable cause of the wing damage that led to the breakup of the Columbia. Sandia researchers used a variety of internal and external computer codes in the analysis, including computational fluid dynamics analyses of the orbiter at various altitudes along the trajectory, heat transfer predictions, calculations of plumes simulating hot gas entering the wing, and material-response calculations for possible damaged wing leading-edge and tile materials.
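As a rough illustration of the heat transfer side of that analysis, and not of Sandia's codes, the sketch below solves one-dimensional transient conduction into a slab whose exposed face is held at a hot-gas temperature, using an explicit finite-difference scheme. Material properties, temperatures, and geometry are illustrative assumptions only.

```python
# Minimal heat-transfer sketch (not Sandia's analysis codes): explicit
# finite-difference solution of 1D transient conduction into a slab whose
# exposed surface is held at a hot-gas temperature.
ALPHA = 1e-5                     # m^2/s, assumed thermal diffusivity
L = 0.05                         # m, slab thickness
N = 50                           # grid points
DX = L / (N - 1)
DT = 0.4 * DX**2 / ALPHA         # time step satisfying the explicit stability limit
T_HOT, T_INIT = 1600.0, 300.0    # K, assumed hot-gas and initial temperatures

temp = [T_INIT] * N
temp[0] = T_HOT                  # exposed face held at the hot-gas temperature
for _ in range(int(60.0 / DT)):  # simulate one minute of heating
    new = temp[:]
    for j in range(1, N - 1):
        new[j] = temp[j] + ALPHA * DT / DX**2 * (temp[j-1] - 2*temp[j] + temp[j+1])
    temp = new
    temp[0] = T_HOT              # re-apply boundary conditions
    temp[-1] = temp[-2]          # insulated back face

print(f"back-face temperature after 60 s: {temp[-1]:.1f} K")
```

The actual investigation coupled three-dimensional flow, plume, and material-response calculations across the reentry trajectory, far beyond this single-slab picture.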
Sandia was also recently selected as one of four institutions to develop new supercomputer prototype systems for the Defense Advanced Research Projects Agency (DARPA), which launched the Ubiquitous High Performance Computing (UHPC) program to meet the Department of Defense's increasing advanced computing needs.
For more information on NNSA's supercomputing capabilities, visit the new and improved ASC webpage.
Established by Congress in 2000, NNSA is a semi-autonomous agency within the U.S. Department of Energy responsible for enhancing national security through the military application of nuclear science in the nation's national security enterprise. NNSA maintains and enhances the safety, security, reliability, and performance of the U.S. nuclear weapons stockpile without nuclear testing; reduces the global danger from weapons of mass destruction; provides the U.S. Navy with safe and effective nuclear propulsion; and responds to nuclear and radiological emergencies in the U.S. and abroad. Visit http://www.nnsa.energy.gov/ for more information.