Red Hat Joins Forces with DOE Laboratories

June 1, 2022

RALEIGH, N.C., June 1, 2022 — Red Hat, Inc., the world’s leading provider of open source solutions, has announced it is collaborating with multiple U.S. Department of Energy (DOE) laboratories, including Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, and Sandia National Laboratories, to bolster cloud-native standards and practices in high-performance computing (HPC).

Adoption of HPC is expanding beyond traditional use cases. Advancements in artificial intelligence, machine learning and deep learning, as well as compute- and data-driven analytics, are driving greater interest in, and need for, running scalable containerized workloads on traditional HPC infrastructure. According to industry analyst firm Hyperion Research, roughly one-third of all HPC system revenue will be dedicated to AI-centric systems by 2025, a nearly 23% CAGR over the five-year period,1 driven by the influx of AI workloads. Additionally, nearly 20% of HPC users’ HPC-enabled AI workloads are currently run in the cloud.2

Red Hat is a leader in cloud-native innovation across hybrid and multicloud environments, while the laboratories understand the needs and unique demands of massive-scale HPC deployments. By establishing a common foundation of technology best practices, this collaboration seeks to use standardized container platforms to link HPC and cloud computing footprints, helping to fill potential gaps in building cloud-friendly HPC applications while creating common usage patterns for industry, enterprise and HPC deployments.

Together with the laboratories, Red Hat will focus on advancing four specific areas that address current gaps and help lay the groundwork for exascale computing: standardization, scale, cloud-native application development, and container storage. Examples of collaborative projects between Red Hat and DOE laboratories include:

Bringing standard container technologies to HPC

Red Hat and the National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab recognize the importance of standards-based solutions in enabling computing innovation, especially when technologies must span from the edge to the cloud to HPC environments. From container security to scaling containerized workloads, common, accepted practices help HPC sites get the most from container technologies. To better meet the unique requirements of large-scale HPC systems and pave the way for organizations to take advantage of containers in exascale computing, Red Hat and NERSC are collaborating on enhancements to Podman, a daemonless container engine for developing, managing and running container images on a Linux system, so that it can replace NERSC’s custom-developed container runtime, Shifter.
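As a rough illustration of what “daemonless” means in practice, the sketch below composes a rootless Podman invocation of the kind an HPC user might run. The image name, flags, and helper function are illustrative assumptions for this article, not NERSC’s actual Shifter-replacement configuration.

```python
# Hypothetical sketch: composing a rootless (daemonless) Podman command line.
# Unlike a daemon-based engine, each Podman invocation is an ordinary user
# process, which is what makes it attractive on shared HPC login/compute nodes.
import shlex

def podman_run(image, command, keep_id=True):
    """Build a `podman run` argument list for a one-shot containerized job."""
    cmd = ["podman", "run", "--rm"]
    if keep_id:
        # Map the invoking user's UID/GID into the container -- a common
        # requirement on shared clusters where jobs must not run as root.
        cmd.append("--userns=keep-id")
    cmd.append(image)
    cmd.extend(command)
    return cmd

cmd = podman_run("docker.io/library/python:3.11-slim",
                 ["python3", "-c", "print('hello from a rootless container')"])
print(shlex.join(cmd))  # the command a user (or batch script) would execute
```

Because the engine has no central daemon, a command like this can be embedded directly in a batch job script and runs entirely with the submitting user’s privileges.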

Running Kubernetes at massive scale

Red Hat has been collaborating with Sandia National Laboratories on the SuperContainers project for several years, working to make Linux containers and other building blocks of cloud-native computing more readily accessible to supercomputing operations. In this expanded collaboration, Red Hat and Sandia National Laboratories intend to explore the deployment scenarios of Kubernetes-based infrastructure at extreme scale, providing easier, well-defined mechanisms for delivering containerized workloads to users.

Bridging traditional HPC jobs with cloud-native workloads

Red Hat and Lawrence Livermore National Laboratory are collaborating to bring HPC job schedulers, such as Flux, to Kubernetes through a standardized programmatic interface. This will help IT teams supporting supercomputing operations better manage traditional parallel workflows alongside containerized jobs, including how this mix of technologies interacts with low-level hardware devices, such as accelerators and high-speed networks.
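To make the bridging idea concrete, here is a minimal Kubernetes Job manifest of the kind a scheduler-to-Kubernetes interface might emit for a traditional batch step. The names, image, command, and GPU resource are illustrative placeholders, not part of the actual Flux integration.

```yaml
# Hypothetical sketch: a batch-style HPC step expressed as a Kubernetes Job.
# A scheduler bridge (for example, one driven by Flux) could generate
# manifests like this so that traditional parallel work and containerized
# work share one control plane.
apiVersion: batch/v1
kind: Job
metadata:
  name: sim-step-001            # illustrative job name
spec:
  backoffLimit: 0               # batch steps typically fail fast, not retry
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: solver
          image: registry.example.com/hpc/solver:latest  # placeholder image
          command: ["mpirun", "-np", "4", "./solver", "input.dat"]
          resources:
            limits:
              nvidia.com/gpu: 1  # accelerator request; device plugin assumed
```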

Reimagining storage for containers

For containers to be used effectively across both HPC and commercial cloud resources, a set of standard interfaces is needed to manage the various container image formats and to provide access to distributed file systems. Red Hat and the three DOE national laboratories aim to define the mechanisms by which container images can be migrated between and deployed with different container engines, allowing users to freely move their applications across popular container runtime platforms, as well as to create mechanisms that allow containers to use distributed file systems as persistent storage.
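As one hedged illustration of the persistent-storage half of this work, the fragment below claims a volume through the Container Storage Interface (CSI) pattern Kubernetes already uses. The storage class name is a placeholder: the actual drivers and mechanisms for distributed file systems are precisely what the collaboration aims to define.

```yaml
# Hypothetical sketch: requesting distributed-file-system storage for a
# container via a CSI-backed PersistentVolumeClaim. The storage class
# "lustre-csi" is a placeholder for whatever driver a site deploys.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-data
spec:
  storageClassName: lustre-csi   # placeholder CSI storage class
  accessModes:
    - ReadWriteMany              # many nodes share the same scratch space
  resources:
    requests:
      storage: 500Gi
```

A pod would then mount `scratch-data` like any other volume, giving a containerized job access to the shared file system without engine-specific plumbing.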

Through the collaboration and Red Hat’s experience supporting some of the most powerful supercomputers in the world, HPC sites will be able to abstract away the immense complexities their environments can present, benefiting the range of United States exascale machines being deployed by the DOE.

Supporting Quotes

Chris Wright, senior vice president and chief technology officer, Red Hat
“The HPC community has served as the proving ground for compute-intensive applications, embracing containers early on to help deal with a new set of scientific challenges and problems. That early adoption, however, led to a lack of standardization across HPC sites, creating barriers to building and deploying containerized applications that can effectively span large-scale HPC, commercial and cloud environments while also taking advantage of emerging hardware accelerators. Through our collaboration with leading laboratories, we are working to remove these barriers, opening the door to liberating next-generation HPC workloads.”

Earl Joseph, Ph.D., chief executive officer, Hyperion Research
“High performance computing infrastructure must adapt to the requirements of today’s heterogeneous workloads, including workloads that use containers. Red Hat’s partnership with the DOE labs is designed to allow the new generation of HPC applications to run in containers at exascale while utilizing distributed file system storage, providing a strong example of collaboration between industry and research leaders.”

Shane Canon, senior engineer, Lawrence Berkeley National Laboratory
“The collaboration with the Podman community and Red Hat engineers is helping us to explore and co-develop enhancements that will allow Podman to scale and perform for the largest HPC workloads. We have already demonstrated this across 512 GPU nodes on Perlmutter. NERSC sees a convergence of HPC and cloud-native workloads, and Podman can be an important tool in helping to bridge between these two worlds.”

Bronis R. de Supinski, chief technology officer, Lawrence Livermore National Laboratory
“High performance computing infrastructure is becoming more diverse and is increasingly being used to run non-traditional HPC workflows. We need to provide mechanisms for scheduling various types of workflows, and we expect container orchestration frameworks like Kubernetes and Red Hat OpenShift to be a significant part of the software ecosystem, effectively contributing to the convergence of the HPC and cloud realms.”

Andrew J. Younge, Ph.D., R&D manager and computer scientist, Sandia National Laboratories
“Sandia and the DOE are seeing an increased need to support more diverse HPC workloads, beyond traditional batch-based modeling and simulation codes. This requires us to find new and innovative ways to enable services, tasks, and data persistence models in tight coordination with current simulation capabilities. Furthermore, workload portability remains an important consideration, where containers are now a key component of our code deployment strategy. Sandia’s collaboration with Red Hat on Podman and Kubernetes-based OpenShift enables us to investigate approaches for delivering modeling and simulation capabilities as a service to Sandia’s designer and analyst communities.”

Notes

1 Source: Hyperion Research, “Worldwide HPC-based Artificial Intelligence (AI) Market Forecast, 2020-2025”

2 Source: Hyperion Research, “HPC and Containers — An Intriguing Combination”

About Red Hat, Inc.

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.


Source: Red Hat, Inc.
