Flux Supercomputing Workload Manager Hits Milestones in Advance of Supporting Exascale Science

By Scott Gibson

March 2, 2022

High-performance computing, or supercomputing, combined with new data-science approaches such as machine learning and artificial intelligence (AI), gives scientists the ability to explore phenomena at levels of detail that would otherwise be impossible or impractical. Applications range from solving the most difficult physics calculations, to designing better drugs for cancer and COVID-19, to optimizing additive manufacturing, and more. Finding answers to these important and timely problems on supercomputing systems involves coordinating numerous interconnected computational tasks through complex scientific workflows.

“Nowadays, many science and engineering disciplines require far more applications than ever before in their scientific workflows,” said Dong H. Ahn, a computer scientist at Lawrence Livermore National Laboratory’s (LLNL’s) Livermore Computing (LC). “In many cases, a single job needs to run multiple simulation applications at different scales along with data analysis, in situ visualization, machine learning, and AI.”

Scientific workflow requirements combined with hardware innovations—for example, extremely heterogeneous resources such as GPUs, multitiered storage, AI accelerators, quantum computing of various configurations, and cloud computing resources—are increasingly rendering traditional resource management and scheduling software incapable of handling the workflows and adapting to emerging supercomputing architectures. As this challenge has emerged, the LC in-house system software team has had a good vantage point from which to observe it.

Over the course of many years, the team has developed products to run and manage science simulation codes across multiple compute clusters. Among the fruits of their labor is Slurm, a workload manager used worldwide. The team realized, however, that today’s workflow management and resource scheduling challenges called for a fundamental rethinking of software design that would transcend conventional solutions.

Overcoming Conventional Limitations

“We all knew our goal was a giant undertaking,” Ahn said. “But our vision for next-gen manager software was compelling enough and well-received by our stakeholders within the Department of Energy [DOE] Advanced Simulation and Computing [ASC] program and then later, the Exascale Computing Project [ECP].”

Computer scientists at LLNL devised an open-source, modular, fully hierarchical software framework called Flux that manages and schedules computing workflows to use system resources more efficiently and provide results faster. Flux’s modular development model enables a rich and consistent API that makes it easy to launch Flux instances from within scripts. Fully hierarchical means that every Flux “job step” can be a full Flux instance, with the ability to schedule more job steps on its resources.
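
To make the “fully hierarchical” idea concrete, here is a minimal sketch that assumes the flux-core Python bindings are installed and the script runs inside an existing Flux instance; the workflow script name, node count, and task count are placeholders rather than anything taken from the Flux team’s own examples.

```python
# Minimal sketch: submit a job whose command boots a child Flux instance,
# which can then schedule its own job steps on the resources it was given.
# Assumes the flux-core Python bindings; names and counts are placeholders.
import flux
from flux.job import JobspecV1, submit

handle = flux.Flux()  # connect to the enclosing Flux instance

nested = JobspecV1.from_command(
    command=["flux", "start", "./run_workflow.sh"],  # placeholder workflow script
    num_nodes=2,
    num_tasks=2,
)

jobid = submit(handle, nested)
print(f"launched a nested Flux instance as job {jobid}")
```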

Flux offers the advantages of higher job throughput, better specialization of the scheduler, and portability to different computing environments, yet it manages complex workflows as simply as conventional ones.

Because traditional resource data models are largely ineffective at coping with computing resource heterogeneity, the Flux team adopted graph-based scheduling to manage complex combinations of extremely heterogeneous resources.

In a graph consisting of a set of vertices and edges, graph-based scheduling puts the relationships between resources on an equal footing with the resources themselves, so complex scheduling policies can be expressed without changing the scheduler code. One case in which Flux’s graph approach solves a critical scheduling need is the use of HPE’s Rabbit multi-tiered storage modules on the upcoming exascale-class supercomputer El Capitan at LLNL.
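
As a purely illustrative sketch of the idea, and not Flux’s actual Fluxion data model, the toy graph below records resources as vertices and containment as edges, so a request such as “a GPU plus nearby Rabbit storage” becomes a graph traversal rather than a change to the scheduler’s code. All names and the structure are hypothetical.

```python
# Illustrative only: a toy resource graph with vertices (resources) and
# edges (containment), loosely in the spirit of graph-based scheduling.
graph = {
    "cluster0": ["rack0"],
    "rack0": ["node0", "node1", "rabbit0"],      # storage sits beside nodes
    "node0": ["cpu0", "cpu1", "gpu0"],
    "node1": ["cpu2", "cpu3", "gpu1"],
    "rabbit0": [],                               # multi-tiered storage module
    "cpu0": [], "cpu1": [], "cpu2": [], "cpu3": [],
    "gpu0": [], "gpu1": [],
}

def find(kind, root="cluster0"):
    """Depth-first walk that collects vertices whose name starts with `kind`."""
    hits, stack = [], [root]
    while stack:
        vertex = stack.pop()
        if vertex.startswith(kind):
            hits.append(vertex)
        stack.extend(graph[vertex])
    return hits

# A request like "one GPU plus the storage module near it" is answered by
# walking the graph, not by hard-coding one machine layout into the scheduler.
print(find("gpu"), find("rabbit"))
```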

Flux users can push more work through a supercomputing system more quickly and can spin up their own personalized Flux instance on the system. Additionally, workflows that must run at different sites no longer need to code to each site-specific scheduler—they can instead code to Flux and rely on Flux to handle the nuances of each individual site.

The Flux development team, from top left: Thomas Scogland, Albert Chu, Tapasya Patki, Stephen Herbein, Mark Grondona, Becky Springmeyer, Christopher Moussa, Jim Garlick, Daniel Milroy, Clay England, Michela Taufer (Academic Co-PI), Ryan Day, Dong H. Ahn (PI), Barry Rountree, Zeke Morton, Jae-Seung Yeom, James Corbett. Credit: LLNL

The Impact of Flux

Flux has showcased its innovative characteristics by enabling COVID-19, cancer, and advanced manufacturing projects.

Using Flux, a highly scalable drug design workflow demonstrated the ability to expediently produce potential COVID-19 drug molecules for further clinical testing. The paper documenting that work was one of four finalists for the Gordon Bell Prize at the SC20 supercomputing conference.

In cancer research, simulating RAS protein interactions at the micromolecular-dynamics level is a critical aim of life science and biomedical researchers, because when mutated, RAS is implicated in one third of human cancers. Flux enabled the Multiscale Machine-Learned Modeling Infrastructure (MuMMI) project to successfully execute its complex scientific workflow to simulate RAS on the pre-exascale Sierra supercomputer at LLNL. MuMMI paves the way for a new genre of multiscale simulation for cancer research, coupling multiple scales using a hypothesis-driven selection process.

ECP’s Exascale Additive Manufacturing (ExaAM) project is focused on accelerating the widespread adoption of additive manufacturing by enabling routine fabrication of qualifiable metal parts. By incorporating Flux into a portion of their ExaConstit workflow, the ExaAM team demonstrated a 4× job throughput performance improvement.

“Flux allowed them to bundle their small jobs into a large IBM LSF [short for load sharing facility] resource allocation, and they were able to run them all together with only about five lines of changes in their script,” Ahn said.
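
The article does not reproduce the ExaAM team’s actual script changes; the hedged sketch below only illustrates the general bundling pattern, assuming the flux-core Python bindings are available inside the outer allocation. The application name, input files, and task counts are placeholders.

```python
# Hedged sketch of the bundling pattern: inside one large allocation that is
# running a Flux instance, submit many small jobs and let Flux pack them onto
# the allocated nodes. Command, inputs, and counts are placeholders.
from concurrent.futures import as_completed
from flux.job import FluxExecutor, JobspecV1

specs = [
    JobspecV1.from_command(
        command=["./small_sim", f"--input=case_{i}.toml"],  # placeholder app
        num_tasks=4,
        cores_per_task=1,
    )
    for i in range(100)
]

with FluxExecutor() as pool:
    futures = [pool.submit(spec) for spec in specs]
    for fut in as_completed(futures):
        fut.result()  # blocks until the corresponding job completes
        print("one more small job finished inside the single allocation")
```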

To extend such benefits to a wider range of ECP workflows, the Flux team has also played a major role in the ECP ExaWorks project as one of its four founding members. The goal of ExaWorks is to build a Software Development Kit, or SDK, for workflows and a community around common APIs and open-source components that can be leveraged individually or in combination to create sophisticated dynamic workflows on exascale-class supercomputers.

Innovation and Collaboration

Flux won a 2021 R&D 100 Award in the Software/Services category.

“The reason Flux won is probably a mix of our past success and the future prospects,” Ahn said. “The scientific and enterprise computing community reviewed our technology, and I believe they saw it as future ready in terms of how we support the workflows and how a diverse set of specialty hardware needs to be supported in cloud computing as well as HPC. So, my guess is that all that factored in.”

Flux’s graph approach has positioned the product for the convergence of HPC and cloud computing on the El Capitan system and beyond and has fostered strategic multidisciplinary collaborations. Under a memorandum of understanding, the Flux team formed a partnership with Red Hat OpenShift and the IBM Thomas J. Watson Research Center, which led to the publication of key findings in two papers: one for the 2021 Smoky Mountain Conference (SMC21) and the other for the CANOPIE HPC Workshop at the SC21 supercomputing conference.

The collaborations with Red Hat OpenShift and IBM informed Flux’s converged computing directions, setting the stage for the enablement and testing of HPE Rabbit storage. That accomplishment stemmed from scientists at T. J. Watson creating KubeFlux, which used one of the core components of Flux to make intelligent, sophisticated pod placement decisions in Kubernetes, an open-source container orchestration system for automating software deployment, scaling, and management. Pods are the smallest, most basic deployable container objects in Kubernetes.

“As we’ve collaborated more closely with IBM and now our Red Hat OpenShift partners, we’ve gone several steps further with KubeFlux and more tightly integrated and provided a more feature-rich plugin,” said LLNL computer scientist and Flux team member Dan Milroy. “We take the scheduling component of Flux and plug that into Kubernetes, and then that is an engine that drives these very sophisticated scheduling decisions in terms of where to place pods on hardware resources.”
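
The KubeFlux plugin’s own code is not shown in the article; the sketch below only illustrates the general Kubernetes mechanism such a plugin builds on, using the official Kubernetes Python client to create a pod that opts into a non-default scheduler or scheduling profile via schedulerName. The profile name “kubeflux” and the container image are assumptions for illustration.

```python
# Illustrative sketch with the Kubernetes Python client: a pod that asks to be
# placed by a custom scheduler profile instead of the default scheduler.
# The scheduler name "kubeflux" and the image are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gromacs-test"),
    spec=client.V1PodSpec(
        scheduler_name="kubeflux",  # route placement to the plugin scheduler
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="app",
                image="example.org/gromacs:latest",  # placeholder image
                command=["gmx", "--version"],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```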

The Flux team’s SMC21 paper contains a description of the new KubeFlux plugin and details about their background research on Kubernetes and its scheduling framework, which allows third-party plugins to supply scheduling-decision information to Kubernetes.

“We discovered there is a pretty significant limitation in terms of the API that Kubernetes exposes and how third-party plugins can integrate with that, which can result in split-brain states for schedulers,” Milroy said.

Building on the SMC21 paper, the Flux team published in the CANOPIE HPC Workshop, taking the work a step further with studies comparing the performance of GROMACS, one of the most widely used open-source chemistry codes, when it is scheduled by the Kubernetes default scheduler versus by KubeFlux.

The team found that KubeFlux makes far more sophisticated and intelligent decisions than the Kubernetes default scheduler about where to place pods on resources, enabling a 4× improvement in GROMACS performance under certain circumstances.

“Part of the reason behind this is that Kubernetes is designed to facilitate and declaratively manage microservices rather than high-performance applications, and now that we’re seeing a movement toward integrating HPC and cloud together, we’re seeing applications that demand more performance, but truly rely on the Kubernetes default scheduler,” Milroy said. “That’s exposing limitations in terms of its decision-making capabilities and its orientation toward microservices rather than toward more high-performance applications. The demand for greater performance is increasing, and this KubeFlux plugin scheduler that we’ve created is designed to meet that demand.”

Forward Motion

Among the next actions for the Flux project is enhancing the software’s system resource manager to ensure the product’s multi-user mode schedules jobs in a manner that gives multiple simultaneous users their fair share of access to system resources based on the amount they requested up front. Working on that aspect are LLNL computer scientists Mark Grondona, Jim Garlick, Al Chu, Chris Moussa, James Corbett, and Ryan Day.

Moussa handles job accounting, which he described as having two forks.

“One is balancing the order of jobs when they’re submitted in a multi-user environment, and then there’s the general administration and management of users in the database,” Moussa said. “So that’s where most of my work is focused, and we’re continuing to make progress on that front in preparation for the system instances of Flux.”
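
The article does not describe the actual accounting algorithm; the toy function below is only a generic illustration of the fair-share idea mentioned above, in which a user’s priority factor falls as their usage outpaces the share they were allocated.

```python
# Toy fair-share sketch (not flux-accounting's actual algorithm): a user's
# priority factor drops as their recent usage exceeds their allocated share.
def fairshare(allocated_share, recent_usage, total_usage):
    """Return a factor in (0, 1]; 1.0 means at or under the allocated share."""
    if total_usage == 0:
        return 1.0
    used_share = recent_usage / total_usage
    return min(1.0, allocated_share / max(used_share, 1e-9))

# Example: a user entitled to 25% of the machine who consumed 50% recently.
print(fairshare(0.25, recent_usage=500.0, total_usage=1000.0))  # -> 0.5
```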

Flux is composed of many communication brokers that create a tree-based overlay network that must remain resilient so that messages can be passed between different parts of the system. Because of that dynamic, many issues in the software design revolve around resiliency.
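
Flux’s real overlay code is more involved, but as a concept sketch the helper below computes parent and child broker ranks in a k-ary tree (the arity of 2 is an arbitrary example), which shows why a failed interior broker cuts off an entire subtree of the overlay.

```python
# Concept sketch of a k-ary tree overlay: given a broker rank, compute its
# parent and children. If an interior broker fails, every rank beneath it
# loses its route toward rank 0 until the overlay recovers.
def parent(rank, k=2):
    return None if rank == 0 else (rank - 1) // k

def children(rank, size, k=2):
    first = rank * k + 1
    return [child for child in range(first, first + k) if child < size]

size = 15  # brokers in the instance (example value)
print(parent(6), children(6, size))  # -> 2 [13, 14]
```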

“If brokers that are supposed to be routing messages go down, a lot of problems happen,” Grondona said. “So, we’re focusing on the initial system instances, keeping the tree-based network simple, and then adding a lot of functionality to deal with exceptional events like brokers going down or other problems happening. We have to be able to keep the system instances running for weeks or months without losing user jobs or data, and so we’re developing a lot of low-level testing to help us achieve those goals.”

A multi-user computing environment makes properly designed and implemented security features critical. The Flux team has learned from experience with many projects that if the security-significant bits—i.e., the ones that run with privilege—can be isolated and that layer of code kept small, harmful bugs that could lead to privilege attacks are less likely. A separate Flux security project contains all of the security-significant code, the main part of which is the IMP, or Independent Minister of Privilege, a security helper for Flux.

Flux itself runs as an unprivileged user across the network, so a problem there does not by itself lead to privilege escalation; the IMP is invoked only during the transition to running work as the submitting user.

“We use cryptographic signatures to ensure that the IMP only runs work that a user has requested,” Grondona said. “And then the plan is to make heavy use of Linux cgroups to isolate different users’ jobs and allocate them using a Linux feature. The users are given only the resources they’re allowed to have. At the system instance level, the plan now is to have every job that’s submitted spin up as a single user of Flux. Everything under that is contained. It’s running as that one user, and they have all the features of Flux within that one job. We feel pretty good about the security design in Flux.”
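
The specific signing scheme Grondona refers to is not detailed in the article; the generic sketch below is not flux-security’s actual implementation and only illustrates the idea that the privileged helper runs work only when the request from the unprivileged side carries a signature that verifies. The shared key and request fields are placeholders.

```python
# Generic illustration of the signing idea (NOT flux-security's mechanism):
# the unprivileged side signs the job request, and the privileged helper
# refuses anything whose signature does not verify.
import hashlib
import hmac
import json

SECRET = b"site-provisioned-key"  # placeholder; real deployments manage keys carefully

def sign_request(request):
    payload = json.dumps(request, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload, signature

def privileged_side_accepts(payload, signature):
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # run work only if True

payload, sig = sign_request({"user": "alice", "command": ["./app"]})
print(privileged_side_accepts(payload, sig))          # True
print(privileged_side_accepts(payload + b"x", sig))   # False: tampered request rejected
```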

Flux’s security is designed such that the attack surface of the component that runs as root in privileged mode is very small.

“So, it drastically lowers the possibility of being compromised, whereas other products that run the whole thing as root can have bad things happen if even a small component of the product gets compromised,” Ahn said.

To propel Flux to its next stage of development, the project’s core team will deploy a superset consisting of a system instance and a single-user mode instance running simultaneously. Then the next part of the plan of record is to replace the existing solution on large Linux clusters in Livermore systems.

“That means that once we get into Livermore systems, the bits will go into our sister laboratories that include Los Alamos and Sandia,” Ahn said. “So that’s one big area. Then we’re going to continue to support single-user mode, where users can use Flux’s single-user mode without waiting for their corresponding center to replace their workload manager with Flux. We’re going to support that mode of operation for a while. But as users use Flux, there will be more and more requests to the center to replace their workload manager. So, I can see two to three years down the road, there’ll be more system instances popping up at other high-end computing sites.”

With respect to cloud computing, the Flux team is in learning mode, researching the challenges and forming strategic R&D collaborations, with the aim of pursuing that approach over the next two or three years to find product solutions that can be channeled into the R&D efforts.

“Red Hat recently told us they want to place a product around KubeFlux, so that’s going to be another interesting bit,” Ahn said. “And I’m very excited to see what the cloud guys say when KubeFlux is available on the cloud side like Amazon and when they run HPC workloads on Amazon AWS or Microsoft Azure.”

As part of the Flux team’s next big R&D effort, they are preparing a pitch that will offer a new perspective on how scientific applications are mapped to computing resources at large centers like LC. The aim is to counter the decade-old assumption that users can effectively prescribe every small detail concerning how their applications will be mapped to the computing resources at the center.

“Say I have a drug design workflow and some of the components are working really well on CPU-only supercomputers, while other components are working better on GPUs, and then I try to co-schedule those two things simultaneously with precise imperative prescriptions,” Ahn said. “That’s a very difficult thing to do at this point. And even if scientists can live with that kind of mapping complexity, when their recipes come to a center, the center cannot do a good job of mapping for optimization. So, I’m trying to start a project where we change the fundamental ways to map the scientific application to the entire center resources without having to ask and require users to prescribe every little detail.”

If the application mapping project is approved and funded, users will have higher-level, more flexible idioms for describing their resource needs without having to specify which supercomputers and nodes should be applied, whether simultaneously or in a staggered way.
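
What such idioms might look like is still open; the hypothetical contrast below is not an actual Flux format and only captures the difference between a prescriptive request that pins machines and nodes and a descriptive one that states only what each workflow component needs.

```python
# Hypothetical contrast between prescriptive and descriptive requests;
# every field name here is made up for illustration.
prescriptive = {
    "machine": "cluster-A",                           # user pins the exact machine...
    "nodes": [f"node{i:04d}" for i in range(32)],     # ...and the exact nodes
    "gpus_per_node": 4,
}

descriptive = {
    "components": {
        "docking":   {"needs": "cpu", "tasks": 4096},  # CPU-friendly stage
        "md_refine": {"needs": "gpu", "tasks": 256},   # GPU-friendly stage
    },
    "constraint": "co-schedule or stagger as the center sees fit",
}
```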

Flexibility for the Future  

Applying the descriptive rather than prescriptive approach to application mapping will become even more relevant once the exascale era is established and the convergence of HPC with cloud computing deepens.

“In a cloud software stack, users aren’t asked to prescribe every little detail,” Ahn said. “They select what we call the declarative-style idiom. They want this number of services, and they don’t care where the services are running. The cloud will take care of that. And if this kind of paradigm change is made at the HPC level, our stack will be an inch closer to being more compatible with cloud computing. Cloud computing is huge. It’s like an order of magnitude larger than the HPC market, and we want to make sure HPC software is completely compatible with the cloud, which will be very important for post-exascale.”

The Flux product is well-positioned for the HPC–cloud convergence.

“It’s designed such that it integrates very, very well with, and facilitates, resource dynamism,” Milroy said. “Part of that is the hierarchical nature of it, and the other is the graph-based resource representation. It turns out that in a cloud environment, resources can change. They can change not only in quantity but also in type and in time. Representing the resources in a graph and then having Flux instances be created hierarchically is extremely conducive to managing cloud-based resources and scheduling cloud resources. And that’s going to be a key component of HPC and cloud convergence in the future, where we see Kubernetes merging even closer together with HPC resource managers.”

 

Flux’s fully hierarchical scheduling is designed to cope with key emerging workload challenges: co-scheduling, job throughput, job communication and coordination, and portability. Credit: LLNL

“To do that, you have to have a resource representation that considers all the flexibility of the cloud, and Flux already enables that, which is a huge advantage,” Milroy said. “One of the Flux subprojects is directed at using Flux to instantiate Kubernetes and then co-manage resources.”

Along with the HPC convergence with the cloud, another expected trend is the era of highly specialized hardware.

“Gone are the days when HPC could get its high performance using a few homogeneous compute processors,” Ahn said. “Starting in 2018, new additions to the Top500 list of the most powerful supercomputers drove more performance from specialized hardware, including GPUs, than from general-purpose hardware like CPUs. That trend will be accelerated. Part of that is AI. If you look at the current industry, they are making specialized hardware. About 50 startups are working on ASICs, or application-specific integrated circuits, which include AI accelerators. LC has already put accelerators such as Cerebras and SambaNova in place, and this trend will happen more.”

Some of today’s systems apply heterogeneity through the use of multiple partitions containing different specialized hardware.

“One example is Perlmutter at the National Energy Research Scientific Computing Center, NERSC, which has two partitions, each with a different compute hardware type,” Ahn said. “And if you look at European supercomputers, they have a snowflake-like architecture where they have five or six different partitions within a supercomputer. And our users want to use different collections of hardware in their workflows. The mapping of their workflows, which consist of many applications across different specialized partitions and specialized hardware, will be very hard. Flux has enough flexibility, including its graph-based and API-based approaches, to help us overcome what I call this post-exascale crisis.”

Related Content

ECP podcast episode: The Flux Software Framework Manages and Schedules Modern Supercomputing Workflows

Flux: Building a Framework for Resource Management

flux-framework on github.com


The author, Scott Gibson, is a communications specialist for the Exascale Computing Project. This article originally appeared on the ECP website.
