March 12, 2012
Since its inception in 1943, the Los Alamos National Lab (LANL) has participated in the design of nuclear weapons, a technology that has unfortunately threatened the planet on more than one occasion. But recently the lab has been looking into an application that would use nukes to save mankind. Late last year LANL developed a computer model of how a nuclear detonation could destroy planet-killing asteroids headed toward Earth.
The model in question has been stuck with the rather understated designation of “asteroid mitigation calculation.” In a video discussing the work, Robert Weaver, a research scientist at LANL, says asteroids are simply collections of rock held together by gravity, so a well-placed nuclear blast could be enough to break up even a very large asteroid into harmless pieces.
The software attempts to create an accurate simulation of a 500-meter-long asteroid as a 1-megaton nuclear explosion is delivered to its surface. In the simulation, the asteroid’s internal structure is represented as a collection of granite rocks, and when the nuclear device detonates at the surface, the blast breaks up the mass through a kind of domino effect.
“What we’re looking at are calculations that perform real hydrodynamics on these objects in order to understand whether we can use an energy source of this magnitude to really disrupt this asteroid and prevent the hazard to the entire Earth,” says Weaver. According to him, the simulation shows that a nuclear blast of that power would indeed “fully mitigate” the threat to Earth.
Modeling the reaction required a lot of computing horsepower, so the researchers turned to Cielo, a Cray-built supercomputer rated at 1.1 Linpack petaflops. The machine consists of 8,944 dual-socket nodes with 286 TB of memory, and is powered by 8-core AMD Opteron 6136 CPUs. According to Weaver, the simulation was able to use 32,000 processors (although in this case he probably means cores, given that Cielo has only 17,888 CPU sockets). Weaver noted that the simulation could not have run on any previous machine he had access to at the lab.
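A quick back-of-the-envelope check, using only the node, socket, and core figures quoted above, shows why "cores" is the more plausible reading of Weaver's 32,000-processor figure (this is a reader's sketch, not LANL's own accounting):

# Cielo core-count sanity check, based on the figures cited in the article.
nodes = 8_944            # dual-socket compute nodes
sockets_per_node = 2
cores_per_socket = 8     # 8-core AMD Opteron 6136

sockets = nodes * sockets_per_node   # 17,888 CPU packages
cores = sockets * cores_per_socket   # 143,104 cores

print(f"sockets: {sockets:,}, cores: {cores:,}")
# A 32,000-"processor" run exceeds the socket count (17,888) but fits
# comfortably within the core count (143,104), so the figure almost
# certainly refers to cores.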
Asteroids hitting the earth may seem like a far-fetched notion only realized in Hollywood disaster movies. However, a recent EarthSky report explains that an asteroid named 2012 DA14 will come alarmingly close to the Earth on February 15, 2013. Estimations pin the rock at missing the planet by only 17,000 miles. That is closer than the moon and some satellites. The report also goes on to predict a remote chance of impact by the same asteroid in 2020. Given that, Weaver’s work might be given a real-world test in the not-to-distant future.
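To put that miss distance in perspective, the following rough comparison uses widely published reference values (the Earth-Moon distance and the geostationary orbit altitude are approximations added here, not figures from the EarthSky report):

# Rough comparison of 2012 DA14's predicted miss distance with familiar distances.
miss_distance_mi = 17_000      # predicted closest approach, miles
geo_orbit_mi = 22_236          # geostationary satellite altitude, miles (approx.)
moon_distance_mi = 238_900     # average Earth-Moon distance, miles (approx.)

print(miss_distance_mi < geo_orbit_mi)        # True: inside the geostationary belt
print(miss_distance_mi / moon_distance_mi)    # ~0.07, i.e. about 7% of the lunar distance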