ECP Pagoda Project Rolls Out First Software Libraries

November 2, 2017

Nov. 2 — Just one year after the U.S. Department of Energy’s (DOE) Exascale Computing Project (ECP) began funding projects to prepare scientific applications for exascale supercomputers, the Pagoda Project — a three-year ECP software development effort based at Lawrence Berkeley National Laboratory — has successfully reached a major milestone: making its open source software libraries publicly available as of September 30, 2017.

Led by Scott B. Baden, Group Lead of the Computer Languages and Systems Software (CLaSS) Group within Berkeley Lab’s Computational Research Division, the Pagoda Project is developing libraries that support lightweight global address space communication for exascale applications. The libraries employ the Partitioned Global Address Space (PGAS) model, which lets researchers treat the physically separate memories of each supercomputer node as a single address space. By leveraging available hardware support for global addressing, the Pagoda libraries can significantly reduce the cost of moving data, often a performance bottleneck in large-scale scientific applications, Baden explained.
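A brief illustration may help make the PGAS idea concrete. The sketch below uses UPC++-style calls (upcxx::new_, upcxx::rput and a dist_object to exchange pointers); it is a minimal example written here under the assumption of the UPC++ 1.0 interface, not code taken from the Pagoda release. One rank writes directly into memory that physically lives on another node, as if the two memories formed a single address space.

    // Minimal sketch of the PGAS idea using UPC++-style calls; this is an
    // illustration, not code shipped with the Pagoda release.
    #include <upcxx/upcxx.hpp>
    #include <iostream>

    int main() {
        upcxx::init();

        // Each rank allocates one integer in its share of the global address space.
        upcxx::global_ptr<int> mine = upcxx::new_<int>(upcxx::rank_me());

        // Publish the pointer so that other ranks can locate it.
        upcxx::dist_object<upcxx::global_ptr<int>> dptr(mine);

        if (upcxx::rank_me() == 0 && upcxx::rank_n() > 1) {
            // Rank 0 fetches rank 1's pointer and writes to it one-sidedly:
            // no matching receive is posted on rank 1.
            upcxx::global_ptr<int> remote = dptr.fetch(1).wait();
            upcxx::rput(42, remote).wait();
        }

        upcxx::barrier();
        std::cout << "rank " << upcxx::rank_me()
                  << " holds " << *mine.local() << std::endl;
        upcxx::finalize();
        return 0;
    }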

“Our job is to ensure that the exascale applications reach key performance parameters defined by the DOE,” he added.

This first release of the software is therefore as functionally complete as possible, Baden emphasized, covering a good deal of the specification released last June. “We need to quickly determine if our users, in particular our ECP application developer partners, are satisfied,” he said. “If they can give us early feedback, we can avoid surprises later on.”

GASNet-EX and UPC++

The Pagoda software stack comprises a communication substrate layer, GASNet-EX, and a productivity layer, UPC++. GASNet-EX is a communication interface that provides language-independent, low-level networking for PGAS languages such as UPC and Coarray Fortran, for the UPC++ library, and for the Legion programming language. UPC++ is a C++ interface for application programmers that builds “friendlier” PGAS abstractions on top of GASNet-EX’s communication services.

“GASNet-EX, which has been around for over 15 years and is being enhanced to make it more versatile and performant in the exascale environment, is a library intended for developers of tools that are in turn used to develop applications,” Baden explained. “It operates at the network hardware level, which is more challenging to program than at the productivity layer.” The GASNet-EX effort is led by Pagoda co-PI Paul Hargrove; the library was originally designed by Dan Bonachea, who continues to co-develop the software. Both are members of CLaSS.

As the productivity layer, UPC++ sits at a slightly higher level, presenting GASNet-EX’s capabilities in a form appropriate for application programmers. The goal of this layer is to hide a great deal of idiosyncratic low-level detail while imposing only minimal overhead, so that users come out clearly ahead on the productivity gained.
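As a rough sketch of the kind of detail the productivity layer hides, the fragment below issues a one-sided read that returns a future and chains completion work onto it; the programmer never touches the underlying GASNet-EX handles or network events. This is again an illustrative example assuming the UPC++ interface, and the function and variable names are hypothetical, not part of the UPC++ or Pagoda APIs.

    // Illustrative sketch only: 'print_remote_counter' and 'remote_counter'
    // are hypothetical names, not part of the UPC++ or Pagoda APIs.
    #include <upcxx/upcxx.hpp>
    #include <iostream>

    void print_remote_counter(upcxx::global_ptr<long> remote_counter) {
        // rget issues a one-sided read and immediately returns a future;
        // the GASNet-EX communication underneath is not visible here.
        upcxx::future<long> f = upcxx::rget(remote_counter);

        // Chain completion work onto the future, then block until it has run.
        f.then([](long value) {
            std::cout << "remote counter = " << value << std::endl;
        }).wait();
    }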

Over the past year, the Pagoda team worked closely with several Berkeley Lab partners to develop applications and application frameworks, including the Adaptive Mesh Refinement Co-Design Center (AMReX), Sparse Solvers (ECP AD project) and ExaBiome (ECP AD project). The team also worked with several industry partners, including IBM, NVIDIA, HPE and Cray, and over the next few months will meet with all of the major vendors vying to build the first exascale computer or the components that will go into those machines.

“We are part of a large community of ECP developers,” Baden said. “And the ECP wants to deploy a software stack, a full set of tools, as an integrated package that will enable them to ensure that the pieces are compatible, that they will all work together. I am fortunate to be working with such a talented team that is highly motivated to deliver a vital component of the ECP software stack.” This team includes other members of CLaSS—Steve Hofmeyr and Amir Kamil (at the University of Michigan)—as well as John Bachan, Brian van Straalen and Mathias Jacquelin. Bryce Lelbach, now with NVIDIA, also made early contributions.

Now that they are publicly available, the Pagoda libraries are expected to be used by other ECP efforts and supercomputer users in general to meet the challenges posed not only by the first-generation exascale computers but by today’s petascale systems as well.

“Much of the ECP software and programming technology can be leveraged across multiple applications, both within ECP and beyond,” said Kathy Yelick, Associate Lab Director for Computing Sciences at Berkeley Lab, in a recent interview with HPCwire. For example, AMReX, which was launched last November and recently announced its own first milestone, has released its framework for developing block-structured AMR algorithms; at least five of the ECP application projects are using AMR to efficiently simulate fine-resolution features, Yelick noted.

For the remaining two years of the Pagoda project, the team will be focused on application integration and performance enhancements that adeptly leverage low-level hardware support, Baden noted.


Source: Berkeley Lab
