Six Exascale PathForward Vendors Selected; DoE Providing $258M

By John Russell

June 15, 2017

The much-anticipated PathForward awards for hardware R&D in support of the Exascale Computing Project were announced today with six vendors selected – AMD, Cray, Hewlett Packard Enterprise (HPE), IBM, Intel, and NVIDIA. The Department of Energy (DoE) will provide $258 million, and the vendors must contribute at least 40 percent of the total cost, bringing the total investment to at least $430 million. Under the recently accelerated ECP timetable, the U.S. expects to field one or two exascale machines in 2021, followed by others in the 2023 timeframe.

Few details were revealed about the specific technology projects being undertaken by the PathForward companies, nor was it disclosed how the money will be divided among the vendors. Nevertheless, the awards mark an important milestone in ECP efforts, noted ECP director Paul Messina.

Speaking at a press pre-briefing yesterday, Messina said the PathForward investment was critical to moving hardware technology forward at an accelerated pace. “By that I mean beyond what the vendor or manufacturer roadmaps currently have scheduled. [It also helps bridge] the gap between open-ended architecture R&D and advanced product development focused on the delivery of the first-of-a-kind capable exascale systems,” said Messina.

The ECP program has many elements. PathForward awards are intended to drive the hardware technology research and development required for exascale. Applications and software technology development fall under different ECP programs with separate budgets. The actual procurement of the eventual exascale systems is also handled and funded separately; the individual national labs and facilities that will house and operate the computers purchase their systems directly. It now seems likely the first two exascale computing sites will be Argonne National Laboratory and Oak Ridge National Laboratory, based on spikes in their facilities budgets in the proposed FY 2018 DoE budget.

Much of today’s announcement and yesterday’s briefing had been expected. Messina did confirm that Aurora, the planned successor to the Mira supercomputer at Argonne National Laboratory, is likely to be pushed out or changed. “At present I believe that the Aurora system contract is being reviewed for potential changes that would result in a subsequent system in a different timeframe from the original Aurora system. Since it’s just early negotiations I don’t think we can be any more specific than that,” he said.

It would have been interesting to get a clearer sense of a few specific PathForward technology projects, but none were discussed; much of the work is, predictably, under NDA. Messina identified what are by now the familiar challenges in achieving exascale computing: massive parallelism, memory and storage, reliability, and energy consumption. “Specifically the work funded by PathForward has been strategically aligned to address those key challenges through development of innovative memory architectures, higher-speed interconnects, improved reliability of systems, and approaches for increasing computer power and capability without prohibitive increases in energy demand,” he said.

Messina noted vendor progress in PathForward would be closely monitored: “Firms will be required to deliver final reports on the outcomes of their research, but it’s very important to note this is a co-design effort with other [ECP] activities and we will be having frequent, formally scheduled intermediate reviews every few months. The funding for each of the vendors is based on specific work packages and is released as each work package, an investigation of a particular aspect of the research, is delivered. So it isn’t that we send the money and wait three years and get an answer.”

Messina also emphasized the labs (eventual systems owners) and the ECP app/software teams would be deeply involved in co-design and work product assessment. “Application developers and systems software developers, software library developers, for example, will participate in those evaluations,” he said.

All of the vendors emphasized that they expect to incorporate results of their exascale research into their commercial offerings. William Dally, chief scientist and SVP of research at NVIDIA, noted this is NVIDIA’s sixth DoE R&D contract and that previous research contracts led to major innovations, “such as energy efficient circuits and the NVLink interconnect being incorporated into our Maxwell, Pascal, and Volta GPUs.”

In the official DoE release, Secretary of Energy Rick Perry is quoted: “Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation. These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing—exascale-capable systems.”

It does seem as if increasing tension in the international community is firing up regional and national competitive zeal in pursuit of exascale. Here’s an excerpt from today’s official release:

“Exascale systems will be at least 50 times faster than the nation’s most powerful computers today, and global competition for this technological dominance is fierce. While the U.S. has five of the 10 fastest computers in the world, its most powerful — the Titan system at Oak Ridge National Laboratory — ranks third behind two systems in China. However, the U.S. retains global leadership in the actual application of high performance computing to national security, industry, and science.”

Pressed on how the U.S. stacked up against international rivals, particularly China, in the race to exascale, Messina said, “Our current plan is to have delivery of at least one, not necessarily [only] one, in 2021. I would not characterize that as to catch up with China. We do know of course that China has indicated they plan to have at least one exascale system in 2020, but we, for example, do not know whether that system will be a peak exaflops system versus what we are planning to deliver. A concise answer [to your question is we plan to deliver] at least one system in 2021 and another, if not in 2021, then in 2022.”

For a broader overview of the ECP, see the HPCwire article Messina Update: The US Path to Exascale in 16 Slides.

The six selected PathForward vendors all seek to leverage their respective expertise and ongoing R&D efforts. Senior executives and research staff from each company participated in yesterday’s briefing, but very few specific details were offered, perhaps understandably so. Here are snippets from their comments.

  • AMD. “Exascale is important because it pushes industry to innovate more and faster. While the focus of the PathForward program is on HPC, the benefits are applicable across a wide range of computing platforms and cloud services as well as computational domains such as machine learning and data science,” said Alan Lee, corporate VP for research and advanced development. He positioned AMD as the only company with both x86 and GPU offerings and expertise in melding the two.
  • Cray. “We care very little about peak performance. We are committed to delivering sustained performance on real workloads,” said Steve Scott, SVP and chief technology officer. Cray intends to explore new advances in node-level and system-level technologies and architectures for exascale systems. “[We’ll focus] on building systems that are highly flexible and upgradeable over time in order to take advantage of various [emerging] processor and storage technology.”
  • HPE. HPE plans to leverage its several years of R&D into memory driven computing technologies – think The Machine project. “PathForward will significantly accelerate the pace of our development and allow us to leverage activities and investments such as The Machine. [W]e will accelerate R&D into areas such as silicon photonics, balanced systems architecture, and software [for example],” said Mike Vildibill, VP, advanced technologies, exascale development & federal R&D programs.
  • IBM. “[We believe] future computing is going to be very data centric and we are focused very much on building solutions that allow complex analytics and modeling and simulation to actually be used on very large data sets. We see the major technical challenges to an exascale design to be power efficiency, reliability, scalability, and programmability and we feel very strongly those challenges need to be addressed in the context of a full system design effort,” said Jim Sexton, IBM Fellow and director of data centric systems, IBM Research.
  • Intel. “Exascale from Intel’s perspective is not only about high performance computing. It’s also about artificial intelligence and data analytics. We think these three are all part of the solution and need to be encompassed. So HPC is continuing to grow. It’s really established itself as one of the three pillars of scientific discovery, along with theory and experiment. AI is quickly growing and probably the fastest growing segment of computing as we find ways to efficiently use data to find relationships to make accurate predictions,” said Al Gara, Intel Fellow, data center group chief architect, exascale systems. He singled out managing and reducing power consumption as one area Intel will work on.
  • NVIDIA. “This contract will focus on critical areas including energy efficiency, GPU architectures and resilience, and our findings will certainly be incorporated into future generations of GPUs after the Volta generation,” said Dally. “It also allows us to focus on improving the resilience of our GPUs, which allows them to be applied at greater scale than in the past.”

At least for the moment, the expectation is that the work done during this PathForward contract will be sufficient to support ECP. Asked whether a second PathForward RFP is planned for the 2021 systems, Messina said, “At present, we are not.”

Link to DoE press release: https://exascaleproject.org/path-nations-first-exascale-supercomputers-pathforward/
