Exascale Advocates Stand on Nuclear Stockpiles

By Nicole Hemsoth

May 23, 2013

When it comes to investment in scientific research, the U.S. government tends to have an open ear for new ideas. However, in this time of tight budgets and heightened national security, federal coffers tend to have looser locks when a threat is in play, whether to global competitiveness or to the safety and security of the nation.

According to a group of leading voices in high performance computing who testified before the U.S. House Subcommittee on Energy yesterday, all of these national assets are at stake without sustained investment in exascale systems.

While exascale funding hearings are nothing new, yesterday’s appeal struck a different chord, harmonizing with the urgency of ensuring U.S. nuclear capabilities—a note that has been resonating in headlines lately.

Instead of pitching the “big science” projects that lack a direct call to action, the witnesses put the threat of encroaching dominance from China and others, internal security, continued economic viability, and even the ability to predict tornado paths (a top news item during yesterday’s hearing, following a devastating EF5 in Oklahoma) at center stage, framing exascale as a requirement rather than just another expensive scientific endeavor.

Dr. Roscoe Giles, Chairman of the Advanced Scientific Computing Advisory Committee; Dr. Rick Stevens, Associate Director for Computing, Environment and Life Sciences at Argonne; Dona Crawford, Associate Director for Computation at Lawrence Livermore; and Dr. Dan Reed, VP of Research and Economic Development at the University of Iowa, all weighed in on the expected components of exascale’s future (architecture, power and cooling, memory, etc.) before sounding the urgency alarm.

The hearing’s purpose was to examine draft legislation as it relates to the Department of Energy’s goals to build an exascale system. While the scientific payload of exascale was an important topic, the real meat, particularly when the floor was opened for questions, was how exascale will fit into larger national security goals, including nuclear stockpile stewardship—a rather familiar subject in the context of historical HPC funding.

The government has in hand a $465.59 million proposal for FY 2014 to fund the DOE Office of Science’s Advanced Scientific Computing Research program, which will help spearhead U.S. exascale efforts. Additionally, the National Nuclear Security Administration (NNSA) is requesting a tick over $400 million for its Advanced Simulation and Computing program, which helps the U.S. maintain the safety and viability of its nuclear weapons stockpile without resorting to underground or small-scale above-ground tests.

If the Advanced Simulation and Computing Program rings a bell, it’s because it was an original part of the initial DOE Stockpile Stewardship and Management plan, which took the dirt and grit out of the physical testing of nukes and plugged those possibilities into supercomputers and new instruments instead. Since even the youngest nuclear devices in the U.S. shed are 20 years old, a great deal of testing is needed to see how they will hold up, in terms of stability and viability, under the stresses of aging should the unfortunate need arise.

From the beginning, this stewardship effort and its associated Simulation and Computing program pulled in funding, breathing new life into research endeavors at a number of national labs, most notably Sandia, Lawrence Livermore and Los Alamos, and channeling funds into the private technology sector along the way. To avoid a tangent here, see this analysis of some of the program’s strengths and weaknesses in terms of computational horsepower.

Using the arsenal of current tools, the NNSA continuously assesses each nuclear weapon to certify its reliability and to detect or anticipate any potential problems that may come about as a result of aging. All weapon types in the U.S. nuclear stockpile require routine maintenance, periodic repair, replacement of limited-life components and surveillance (a thorough examination of a weapon), all tasks that Crawford and colleagues say require exaflop-capable resources.

In short, this convincing approach worked in the 1990s, when modeling and simulation capabilities were advancing rapidly, but the question is whether that same call to action will be enough to lend exascale the required $400 million worth of urgency. Combined, however, with the dramatic and timely issue of nuclear threats aimed at allies, not to mention a competitive position that has cooled on multiple industrial and economic fronts, the appeal might carry more weight than it would have even this time last year.

As Dona Crawford explained, the use of exascale systems now represents the only way to truly understand how to make sure the U.S. nuclear stockpile is safe, secure and in top condition. It is the same argument that propelled a great deal of investment into tech companies back in the 1990s, when the NNSA first looked to simulations and supercomputing to carry the stewardship load.

“Computing is the integrating element of maintaining the safety, security and reliability of our nuclear weapons stockpile without returning to underground tests,” said Crawford. “By integrating element, I mean that right now we have old test data, above-ground small test data, a lot of theory and some new models,” but, she added, these cannot be used effectively unless scientists have access to far higher-fidelity simulations.

Even setting aside exascale’s role in ensuring nuclear stockpile safety and security, lagging investment carries another cost: a dwindling of U.S. competitive prowess.

When asked why the U.S. doesn’t look to more international collaboration to reach its exascale ambitions, Dr. Stevens said that this makes sense on the software level, especially since so many large-scale systems use the same open source packages that are then pushed out to the community. However, he argued that it would not be suitable for us to share resources on the hardware front, pointing to what might happen if we were to trust our secure operations to run on hardware built in China.

The competitive threat wasn’t difficult for the speakers to spell out for the committee: they pointed to exascale investments underway in China and Japan, making it clear that these are not insignificant funding efforts.

Dan Reed made the argument that we are facing an uncertain future in HPC as other nations make critical investments in supercomputing, noting, “Global leadership isn’t a birthright.” Even if the nuclear stockpile can make do with its current petascale capabilities, winning a silver, a bronze, or no medal at all in the exascale race presents a bevy of potential problems.
