Six Exascale PathForward Vendors Selected; DoE Providing $258M

By John Russell

June 15, 2017

The much-anticipated PathForward awards for hardware R&D in support of the Exascale Computing Project were announced today with six vendors selected – AMD, Cray, Hewlett Packard Enterprise (HPE), IBM, Intel, and NVIDIA. The Department of Energy (DoE) will provide $258 million, and the vendors must contribute at least 40 percent of the total project costs, bringing the total investment to at least $430 million. Under the recently accelerated ECP timetable, the U.S. expects to field one or two exascale machines in 2021, followed by others in the 2023 timeframe.
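
As a quick check of the arithmetic (a back-of-the-envelope sketch, assuming the vendors contribute exactly the 40 percent minimum, so the DoE’s $258 million covers 60 percent of the total; a larger vendor share would only push the total higher):

\[
\text{Total investment} \;\geq\; \frac{\$258\text{M}}{1 - 0.40} = \frac{\$258\text{M}}{0.60} = \$430\text{M},
\qquad
\text{vendor share} \;\geq\; 0.40 \times \$430\text{M} = \$172\text{M}.
\]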

Few details were revealed about the specific technology projects being undertaken by the PathForward companies, nor about how the money will be divided among the vendors. Nevertheless, the awards mark an important milestone in ECP efforts, noted ECP director Paul Messina.

Speaking at a press pre-briefing yesterday, Messina said the PathForward investment was critical to moving hardware technology forward at an accelerated pace. “By that I mean beyond what the vendor or manufacturer roadmaps currently have scheduled. [It also helps bridge] the gap between open-ended architecture R&D and advanced product development focused on the delivery of the first-of-a-kind capable exascale systems,” he said.

The ECP program has many elements. PathForward awards are intended to drive the hardware technology research and development required for exascale. Applications and software technology development fall under separate ECP program areas with their own budgets. The procurement of the eventual exascale systems is likewise handled and funded separately; the individual national labs and facilities that will house and operate the computers purchase their systems directly. It now seems likely the first two exascale computing sites will be Argonne National Laboratory and Oak Ridge National Laboratory, based on spikes in their facilities funding in the proposed FY 2018 DoE budget.

Much of today’s announcement and yesterday’s briefing had been expected. Messina did confirm that Aurora, the planned successor to the Mira supercomputer at Argonne National Laboratory, is likely to be pushed out or changed. “At present I believe that the Aurora system contract is being reviewed for potential changes that would result in a subsequent system in a different timeframe from the original Aurora system. Since it’s just early negotiations, I don’t think we can be any more specific than that,” he said.

It would have been interesting to get a clearer sense of a few specific PathForward technology projects, but none were discussed. Much of the work is, predictably, under NDA. Messina identified the by-now-familiar challenges to achieving exascale computing: massive parallelism, memory and storage, reliability, and energy consumption. “Specifically, the work funded by PathForward has been strategically aligned to address those key challenges through development of innovative memory architectures, higher-speed interconnects, improved reliability of systems, and approaches for increasing computer power and capability without prohibitive increases in energy demand,” he said.

Messina noted vendor progress in PathForward would be closely monitored: “Firms will be required to deliver final reports on the outcomes of their research, but it’s very important to note this is a co-design effort with other [ECP] activities, and we will be having frequent, formally scheduled intermediate reviews every few months. The funding for each of the vendors is based on specific work packages, [paid out] as each work package, which would be an investigation on a particular aspect of the research, is delivered. So it isn’t that we send the money and wait three years and get an answer.”

Messina also emphasized that the labs (the eventual system owners) and the ECP application/software teams would be deeply involved in co-design and in assessing work products. “Application developers and systems software developers, software library developers, for example, will participate in those evaluations,” he said.

All of the vendors said they expect to incorporate results of their exascale research into their commercial offerings. William Dally, chief scientist and SVP of research at NVIDIA, noted this is NVIDIA’s sixth DoE R&D contract and that previous research contracts led to major innovations, “such as energy-efficient circuits and the NVLink interconnect being incorporated into our Maxwell, Pascal, and Volta GPUs.”

In the official DoE release, Secretary of Energy Rick Perry said, “Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation. These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing—exascale-capable systems.”

It does seem as if increasing tension in the international community is firing up regional and national competitive zeal in pursuit of exascale. Here’s an excerpt from today’s official release:

“Exascale systems will be at least 50 times faster than the nation’s most powerful computers today, and global competition for this technological dominance is fierce. While the U.S. has five of the 10 fastest computers in the world, its most powerful — the Titan system at Oak Ridge National Laboratory — ranks third behind two systems in China. However, the U.S. retains global leadership in the actual application of high performance computing to national security, industry, and science.”

Pressed on how the U.S. stacks up against international rivals, particularly China, in the race to exascale, Messina said, “Our current plan is to have delivery of at least one, not necessarily only one, in 2021. I would not characterize that as catching up with China. We do know of course that China has indicated they plan to have at least one exascale system in 2020, but we, for example, do not know whether that system will be a peak exaflops system versus what we are planning to deliver. A concise answer [to your question is we plan to deliver] at least one system in 2021 and another, if not in 2021, then in 2022.”

For a broader overview of the ECP, see the HPCwire article Messina Update: The US Path to Exascale in 16 slides.

The six selected PathForward vendors all seek to leverage their respective expertise and ongoing R&D efforts. Senior executives and research staff from each company participated in yesterday’s briefing, but very few specific details were offered, perhaps understandably so. Here are snippets from their comments.

  • AMD. “Exascale is important because it pushes industry to innovate more and faster. While the focus of the PathForward program is on HPC, the benefits are applicable across a wide range of computing platforms and cloud services as well as computational domains such as machine learning and data science,” said Alan Lee, corporate VP for research and advanced development. He positioned AMD as the only company with both x86 and GPU offerings and expertise in melding the two.
  • Cray. “We care very little about peak performance. We are committed to delivering sustained performance on real workloads,” said Steve Scott, SVP and chief technology officer. Cray intends to explore new advances in node-level and system-level technologies and architectures for exascale systems. “[We’ll focus] on building systems that are highly flexible and upgradeable over time in order to take advantage of various [emerging] processor and storage technology.”
  • HPE. HPE plans to leverage its several years of R&D into memory-driven computing technologies – think The Machine project. “PathForward will significantly accelerate the pace of our development and allow us to leverage activities and investments such as The Machine. [W]e will accelerate R&D into areas such as silicon photonics, balanced systems architecture, and software [for example],” said Mike Vildibill, VP, advanced technologies, exascale development & federal R&D programs.
  • IBM. “[We believe] future computing is going to be very data centric and we are focused very much on building solutions that allow complex analytics and modeling and simulation to actually be used on very large data sets. We see the major technical challenges to an exascale design to be power efficiency, reliability, scalability, and programmability and we feel very strongly those challenges need to be addressed in the context of a full system design effort,” said Jim Sexton, IBM Fellow and director of data centric systems, IBM Research.
  • Intel. “Exascale from Intel’s perspective is not only about high performance computing. It’s also about artificial intelligence and data analytics. We think these three are all part of the solution and need to be encompassed. So HPC is continuing to grow. It’s really established itself as one of the three pillars of scientific discovery, along with theory and experiment. AI is quickly growing and probably the fastest growing segment of computing as we find ways to efficiently use data to find relationships to make accurate predictions,” said Al Gara, Intel Fellow, data center group chief architect, exascale systems. He singled out managing and reducing power consumption as one area Intel will work on.
  • NVIDIA. “This contract will focus on critical areas including energy efficiency, GPU architectures, and resilience, and our findings will certainly be incorporated into future generations of GPUs after the Volta generation,” said Dally. “It also allows us to focus on improving the resilience of our GPUs, which allows them to be applied at greater scale than in the past.”

At least for the moment, the expectation is that work done under the PathForward contracts will be sufficient to support ECP. Asked whether a second round was planned, Messina said, “At present, we are not [planning a second PathForward RFP for the 2021 systems].”

Link to DoE press release: https://exascaleproject.org/path-nations-first-exascale-supercomputers-pathforward/
