Chief Strategist for the High Performance Computing Modernization Program Talks with SC23 Comms Team

June 1, 2023

June 1, 2023 — The SC23 Communications Team conducted an interview with Dr. Roy Campbell, Chief Strategist for the Department of Defense (DoD) High Performance Computing (HPC) Modernization Program (HPCMP), as a topical feature for Memorial Day week. The following is from their blog post.


This month, we talked to Dr. Roy Campbell, Chief Strategist for the Department of Defense (DoD) High Performance Computing (HPC) Modernization Program (HPCMP). The HPCMP is a congressionally mandated initiative with a budget of approximately $250 million annually, aimed at enhancing warfighter support by applying high-performance computing to essential research, development, test, and evaluation (RDT&E) projects. Dr. Campbell is focused on creating the program’s science, engineering, and software strategy. Our chat with Dr. Campbell covered a wide range of topics: how he got started at the DoD, how the DoD uses HPC in its warfighter modernization efforts, and his take on the need for mentors in the HPC community.

Roy’s Path to Strategic Lead

If there’s one thing Dr. Roy Campbell doesn’t ask when he goes to work every morning, it’s “why?” After a career spanning nearly 30 years with the DoD, he has no doubt that the work he’s doing is meaningful, so he’s excited to go to work and take on big challenges related to modernizing the DoD’s HPC program every day, even if it often leads to compressed schedules at home.

Photo: Dr. Roy Campbell, Chief Strategist, Department of Defense High Performance Computing Modernization Program (DoD HPCMP), at AIAA.

Before his career arc tilted in a strategic direction, Campbell spent his early years in HPC working on networking and data storage solutions for the DoD Supercomputing Resource Center located in Vicksburg, Miss. “In 2002, I moved to the HPCMP’s center in Aberdeen, Md., where I learned to benchmark supercomputers,” explained Campbell. “And then in 2008, I moved to the HPCMP management office in D.C., where I served as the [Defense Research and Engineering Network (DREN)] Program Manager for the DoD’s research, development, test, and evaluation network. I also served as the HPCMP’s Deputy Director, Chief Technology Officer, and Chief Scientist.”

Today, as the HPCMP Chief Strategist, his main responsibility is formulating a strategy or vision for U.S. defense supercomputing. To do that, Campbell examines a wide range of organizational, financial, technological, geopolitical, scientific, and modernization trends. The end goal is to provide senior officials with an actionable blueprint for solving very difficult problems with supercomputers. “Our program was initially focused on defense grand challenges that contribute to scientific knowledge. Today, however, our goal is much broader. We also work to reduce the cost, schedule, and risk for a wide variety of military platforms,” he said.

Reducing Risks & Costs

One of the HPCMP’s missions is to provide a virtual space to support the design and testing of jets, helicopters, ships, submarines, tanks, and antennas without building expensive prototypes or putting the public at risk. “Removing bad designs and refining good ones early through virtual testing saves an extraordinary amount of time and money,” said Campbell. For example, he noted that troubleshooting for the Sikorsky CH-53K King Stallion heavy-lift cargo helicopter helped isolate why the helicopter’s engines were highly inefficient whenever it flew near the ground. Simulations created by software developers at the Naval Research Lab revealed that at low altitudes, engine exhaust was looping back into the engines and straining them. “Minimally, that probably saved more than $100 million in testing and other sundries,” said Campbell.

HPC simulations have also been integral to improving and streamlining training for the fighter jet refueling process with the Boeing KC-46 Pegasus, leading to both significant cost savings and improved safety. Campbell also noted that Congress is now even pushing the DoD to explore how it could use supercomputing modeling and simulations to reduce or eliminate the need for live fire testing near costly assets, such as Ford Class Aircraft Carriers.

Image: CH-53K helicopter simulation.
Image: Jet refueling with the Boeing KC-46 Pegasus.

For Campbell, some of the most exciting work his team is pursuing is around melding hard and soft sciences into single calculations: “Our biggest challenge right now is supporting advanced scenario analyses that incorporate both hard and soft sciences in a single calculation to produce actionable advice for campaign planners. Hard sciences are generally based on physics. Soft sciences include the analysis of social media feeds, economics, population growth, and sentiment.”

Navigating Technology Shifts

When asked about the key hardware and software technologies being considered for the next-generation HPC systems within the DoD, Campbell noted that trends related to quantum and neuromorphic computing aren’t as relevant to defense supercomputing at the moment and likely won’t be for some time. “Today, as we face the end of Moore’s Law, the big shifts we are seeing include a transition away from monolithic general-purpose chips toward general-purpose chiplets and special-purpose monolithic chips,” explained Campbell. “The former affects defense supercomputing the most and may present future opportunities for designing chiplets specifically for defense use cases.”

“Today, as we face the end of Moore’s Law, the big shifts we are seeing include a transition away from monolithic general-purpose chips toward general-purpose chiplets and special-purpose monolithic chips.”

He also noted that determining which technologies are best for DoD systems is getting harder, since performance analysis and benchmarking are much trickier than they used to be. “In the early days, performance analysis was a lot more fun than it is today because we could use a systematic process that made it easy to clearly understand what the performance meant. Today, systems are incredibly complicated, and testing has become more of an art than a science because the configuration and compute layers are so adjustable. In some ways, I miss the days when we didn’t have things like a dynamically adjustable clock for the CPU and other configuration features that can vary with time,” he said.
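The variability Campbell describes, such as dynamic CPU clocks changing behavior from run to run, is one reason modern benchmarking practice favors repeated trials summarized with robust statistics rather than single-shot timings. The sketch below is illustrative only (the `benchmark` helper and its workload are our own, not part of any HPCMP tooling): it times a function over many trials and reports the median alongside the min/max spread, which exposes how much the clock and system noise are moving the numbers.

```python
import statistics
import time

def benchmark(fn, trials=20):
    """Time fn over several trials and summarize the results.

    On systems with dynamic frequency scaling, any single timing
    can be misleading, so we take repeated measurements and report
    a robust statistic (the median) together with the spread.
    """
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(times),
        "min_s": min(times),
        "max_s": max(times),
    }

# Example workload: a small fixed-size computation.
result = benchmark(lambda: sum(i * i for i in range(100_000)))
print(f"median={result['median_s']:.6f}s "
      f"(min={result['min_s']:.6f}s, max={result['max_s']:.6f}s)")
```

A wide gap between min and max is itself a diagnostic: it suggests the machine's configuration (clock, thermal state, background load) is varying across trials, exactly the kind of moving target Campbell contrasts with the more repeatable measurements of earlier systems.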

Collaboration & Community Building

Campbell, who has three degrees in electrical engineering with an emphasis on communications systems and information theory, was hooked on supercomputing once he understood how the low-level components of a supercomputer impacted science and engineering disciplines. He notes that without a mentor, it would have been nearly impossible to gain traction in his early career. “By far, Dr. Larry Davis had the greatest influence over my career. I met him in 2003 in Washington D.C. He gave me an opportunity as a junior member of his supercomputer benchmarking team,” said Campbell. Given his experience, he laments that the push for greater efficiency has driven many organizations to remove mentoring opportunities: “Our community suffers from the aftermath of this short-sighted metric.”

Although he is the HPCMP Chief Strategist, Campbell said that collaboration remains key to his growth, and he is standing on the shoulders of giants. “Over the years, Ms. Thuc Hoang [the Director of DOE NNSA’s ASC Program] has been a great collaborator, and I learn a lot from her on a regular basis. The scale, breadth, and complexity of her program is impressive. And, the foresight of Hon. John J. Young, Jr., the 2007-2009 Undersecretary for Acquisition, Logistics, and Technology is critical to my role. He championed a significant investment in supercomputer-based design software for jets, helicopters, ships, submarines, tanks, and antennas that allows the DoD HPCMP to provide significant value to the DoD and the U.S.”


Source: SC23 Communications Team
