Exascale Advocates Stand on Nuclear Stockpiles

By Nicole Hemsoth

May 23, 2013

When it comes to investment in scientific research, the U.S. government tends to have an open ear for new ideas. However, in this time of tight budgets and heightened national security concerns, federal coffers tend to have looser locks when there is a perceived threat, whether to global competitiveness or to the safety and security of the nation.

According to a group of leading voices in high performance computing who gathered yesterday before the U.S. House Subcommittee on Energy, all of these national interests are at stake without sustained investment in exascale systems.

While exascale funding hearings are nothing new, yesterday’s appeal struck a different chord, harmonizing with the urgency of ensuring U.S. nuclear capabilities—a note that has been resonating in headlines lately.

Instead of pitching “big science” projects that lack a direct call to action, the witnesses put the threat of encroaching dominance from China and others, internal security, continued economic viability, and even the ability to predict tornado paths (a top news item during yesterday’s hearing, following a devastating EF5 in Oklahoma) at center stage, recasting exascale as a requirement rather than just another expensive scientific endeavor.

Dr. Roscoe Giles, Chairman of the Advanced Scientific Computing Advisory Committee; Dr. Rick Stevens, Associate Director for Computing, Environment and Life Sciences at Argonne; Dona Crawford, Associate Director for Computation at Lawrence Livermore; and Dr. Dan Reed, VP of Research and Economic Development at the University of Iowa, all weighed in on the expected components of exascale’s future (architecture, power and cooling, memory, etc.) before ringing the urgency alarm.

The hearing’s purpose was to examine draft legislation as it relates to the Department of Energy’s goals to build an exascale system. While the scientific payload of exascale was an important topic, the real meat, particularly when the floor was opened for questions, was how exascale will fit into larger national security goals, including nuclear stockpile stewardship—a rather familiar subject in the context of historical HPC funding.

Lawmakers have in hand a $465.59 million FY 2014 proposal to fund the DOE Office of Science’s Advanced Scientific Computing Research program, which will help spearhead U.S. exascale efforts. Additionally, the National Nuclear Security Administration (NNSA) is requesting a tick over $400 million for its Advanced Simulation and Computing program, which helps the U.S. maintain the safety and viability of its nuclear weapons stockpile without live underground or above-ground testing.

If the Advanced Simulation and Computing Program rings a bell, it’s because it was an original part of the initial DOE Stockpile Stewardship and Management plan, which took the dirt and grit out of the physical testing of nukes and plugged the possibilities into supercomputers and new instruments instead. Since even the youngest nuclear devices in the U.S. shed are 20 years old, a great deal of assessment needs to be done to see how they will react to the stresses of aging, in terms of stability and viability, should the unfortunate need arise.

From the beginning, this Stewardship program and its associated Simulation and Computing effort pulled in funding, breathing new life into research endeavors at a number of national labs, most notably Sandia, Lawrence Livermore and Los Alamos. It also channeled funds into the private technology sector along the way. To avoid a tangent here, see this separate analysis of some of the program’s strengths and weaknesses in terms of computational horsepower.

Using the arsenal of current tools, the NNSA continuously assesses each nuclear weapon to certify its reliability and to detect or anticipate any potential problems that may come about as a result of aging. All weapon types in the U.S. nuclear stockpile require routine maintenance, periodic repair, replacement of limited-life components, and surveillance (a thorough examination of a weapon), all tasks that Crawford and colleagues say require exaflop-capable resources.

In short, this approach proved convincing in the 1990s, when modeling and simulation capabilities were increasing rapidly, but the question is whether that call to action alone will generate $400 million worth of urgency for exascale today. Combined, however, with the dramatic and timely issue of nuclear threats aimed at allies, not to mention cooling U.S. competitiveness on multiple industrial and economic fronts, the appeal might carry more weight than it would have even this time last year.

As Dona Crawford explained, exascale systems now represent the only way to truly understand whether the U.S. nuclear stockpile is safe, secure and in top condition. It is the same argument that propelled a great deal of investment into technology companies back in the 1990s, when the NNSA first looked to simulation and supercomputing to carry the stewardship load.

“Computing is the integrating element of maintaining the safety, security and reliability of our nuclear weapons stockpile without returning to underground tests,” said Crawford. “By integrating element, I mean that right now we have old test data, above-ground small test data, a lot of theory and some new models,” but that these cannot be used effectively unless scientists have access to far higher-fidelity simulations.

Even setting aside exascale’s role in ensuring nuclear stockpile safety and security, the side effect of lagging investment is a dwindling of U.S. competitive prowess.

When asked why the U.S. doesn’t look to more international collaboration to reach its exascale ambitions, Dr. Stevens said that this makes sense on the software level, especially since so many large-scale systems use the same open source packages that are then pushed out to the community. However, he argued that it would not be suitable for us to share resources on the hardware front, pointing to what might happen if we were to trust our secure operations to run on hardware built in China.

The competitive threat wasn’t difficult for the speakers to lay out for the committee: they pointed to exascale investments underway in China and Japan, making it clear that these are not insignificant funding efforts.

Dan Reed argued that we are facing an uncertain future in HPC as other nations make critical investments in supercomputing, noting, “Global leadership isn’t a birthright.” Even if the nuclear stockpile can make do with its current petascale capabilities, taking silver, bronze, or no medal at all in the exascale race presents a bevy of potential problems.
