Flux Supercomputing Workload Manager Hits Milestones in Advance of Supporting Exascale Science

By Scott Gibson

March 2, 2022

High-performance computing, or supercomputing, combined with new data-science approaches such as machine learning and artificial intelligence (AI), gives scientists the ability to explore phenomena at levels of detail that would otherwise be impossible or impractical. Applications range from solving the most difficult physics calculations, to designing better drugs for cancer and COVID-19, to optimizing additive manufacturing, and more. Answering these important and timely problems on supercomputing systems requires coordinating numerous interconnected computational tasks through complex scientific workflows.

“Nowadays, many science and engineering disciplines require far more applications than ever before in their scientific workflows,” said Dong H. Ahn, a computer scientist at Lawrence Livermore National Laboratory’s (LLNL’s) Livermore Computing (LC). “In many cases, a single job needs to run multiple simulation applications at different scales along with data analysis, in situ visualization, machine learning, and AI.”

Scientific workflow requirements, combined with hardware innovations (extremely heterogeneous resources such as GPUs, multitiered storage, AI accelerators, quantum computers of various configurations, and cloud computing resources), are increasingly rendering traditional resource-management and scheduling software incapable of handling these workflows or adapting to emerging supercomputing architectures. The LC in-house system software team has had a good vantage point from which to watch this challenge emerge.

Over many years, the team has developed products to run and manage science simulation codes across multiple compute clusters. Among the fruits of that labor is Slurm, a workload manager now used worldwide. The team realized, however, that today’s workflow management and resource scheduling challenges called for a fundamental rethinking of software design that would transcend conventional solutions.

Overcoming Conventional Limitations

“We all knew our goal was a giant undertaking,” Ahn said. “But our vision for next-gen manager software was compelling enough and well-received by our stakeholders within the Department of Energy [DOE] Advanced Simulation and Computing [ASC] program and then later, the Exascale Computing Project [ECP].”

Computer scientists at LLNL devised an open-source, modular, fully hierarchical software framework called Flux that manages and schedules computing workflows to use system resources more efficiently and provide results faster. Flux’s modular development model enables a rich and consistent API that makes it easy to launch Flux instances from within scripts. Fully hierarchical means that every Flux “job step” can be a full Flux instance, with the ability to schedule more job steps on its resources.
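As a minimal sketch of what launching a nested instance from a script can look like, the following assumes the flux-core Python bindings are installed and that the code already runs inside a Flux allocation; the workflow script name and slot counts are hypothetical, not taken from the article.

```python
# Minimal sketch: launching a nested Flux instance from Python.
# Assumes the flux-core Python bindings are available and this code runs
# inside an existing Flux instance; "./workflow.sh" is a hypothetical script.
import flux
from flux.job import JobspecV1, submit

handle = flux.Flux()  # connect to the enclosing Flux instance

# Wrap the workflow script in its own child Flux instance; the child can then
# schedule additional job steps on the resources allocated to it.
jobspec = JobspecV1.from_nest_command(
    command=["./workflow.sh"],
    num_slots=4,       # request four schedulable slots for the child instance
    cores_per_slot=1,
)
jobid = submit(handle, jobspec)
print(f"Nested Flux instance submitted as job {jobid}")
```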

Flux offers the advantages of higher job throughput, better specialization of the scheduler, and portability to different computing environments, yet it manages complex workflows as simply as conventional ones.

Because traditional resource data models largely cannot cope with computing-resource heterogeneity, the Flux team adopted graph-based scheduling to manage complex combinations of extremely heterogeneous resources.

In a graph consisting of vertices and edges, graph-based scheduling treats the relationships between resources as first-class information alongside the resources themselves, so complex scheduling policies can be expressed without changing the scheduler code. One case in which Flux’s graph approach meets a critical scheduling need is the use of HPE’s Rabbit multi-tiered storage modules on El Capitan, the upcoming exascale-class supercomputer at LLNL.
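The sketch below is a simplified illustration of the idea, not Flux’s (Fluxion’s) actual data model: compute nodes, GPUs, and a shared Rabbit-style storage module are all vertices, and edges record the relationships a graph-based scheduler can reason over. The library choice (networkx) and all labels are assumptions for illustration only.

```python
# Simplified illustration of a graph-based resource model (not Flux's internal
# format): vertices are resources, edges are relationships the scheduler can
# query when matching a request. Labels and topology are hypothetical.
import networkx as nx

g = nx.DiGraph()

# Resource vertices with type attributes.
g.add_node("cluster0", type="cluster")
for n in ("node0", "node1"):
    g.add_node(n, type="node")
    g.add_edge("cluster0", n, relation="contains")
    for i in range(4):
        gpu = f"{n}-gpu{i}"
        g.add_node(gpu, type="gpu")
        g.add_edge(n, gpu, relation="contains")

# A near-node storage module shared by both nodes (Rabbit-like tier).
g.add_node("rabbit0", type="storage")
g.add_edge("node0", "rabbit0", relation="connects_to")
g.add_edge("node1", "rabbit0", relation="connects_to")

# Toy "match": find nodes that both contain a GPU and connect to storage.
def matches(node):
    neighbors = [v for _, v in g.out_edges(node)]
    has_gpu = any(g.nodes[v]["type"] == "gpu" for v in neighbors)
    has_storage = any(g.nodes[v]["type"] == "storage" for v in neighbors)
    return has_gpu and has_storage

print([n for n, d in g.nodes(data=True) if d["type"] == "node" and matches(n)])
```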

Flux users can push more work through a supercomputing system more quickly and can spin up their own personalized Flux instances within the system. Additionally, workflows that must run at different sites no longer need to code to each site-specific scheduler; they can instead code to Flux and rely on it to handle the nuances of each individual site.

The Flux development team, from top left: Thomas Scogland, Albert Chu, Tapasya Patki, Stephen Herbein, Mark Grondona, Becky Springmeyer, Christopher Moussa, Jim Garlick, Daniel Milroy, Clay England, Michela Taufer (Academic Co-PI), Ryan Day, Dong H. Ahn (PI), Barry Rountree, Zeke Morton, Jae-Seung Yeom, James Corbett. Credit: LLNL

The Impact of Flux

Flux has showcased its innovative characteristics by enabling COVID-19, cancer, and advanced manufacturing projects.

Using Flux, a highly scalable drug design workflow demonstrated the ability to expediently produce potential COVID-19 drug molecules for further clinical testing. The paper documenting that work was one of four finalists for the Gordon Bell Prize at the SC20 supercomputing conference.

In cancer research, simulating RAS protein interactions at the micromolecular-dynamics level is a critical aim of life science and biomedical researchers, because when mutated, RAS is implicated in one third of human cancers. Flux enabled the Multiscale Machine-Learned Modeling Infrastructure (MuMMI) project to successfully execute its complex scientific workflow to simulate RAS on the pre-exascale Sierra supercomputer at LLNL. MuMMI paves the way for a new genre of multiscale simulation for cancer research, coupling multiple scales using a hypothesis-driven selection process.

ECP’s Exascale Additive Manufacturing (ExaAM) project is focused on accelerating the widespread adoption of additive manufacturing by enabling routine fabrication of qualifiable metal parts. By incorporating Flux into a portion of their ExaConstit workflow, the ExaAM team demonstrated a 4× improvement in job throughput.

“Flux allowed them to bundle their small jobs into a large IBM LSF [short for load sharing facility] resource allocation, and they were able to run them all together with only about five lines of changes in their script,” Ahn said.
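A rough sketch of that pattern, assuming the flux-core Python bindings and a Flux instance already started inside the outer (e.g., LSF) allocation, might look like the following; the executable and input-file names are hypothetical, and the script is illustrative rather than the ExaAM team’s actual code.

```python
# Rough sketch (assumptions noted above): submit many small jobs to a single
# Flux instance running inside one large outer allocation, so the small jobs
# are packed and throttled by Flux rather than by the site scheduler.
from concurrent.futures import as_completed
from flux.job import FluxExecutor, JobspecV1

# Hypothetical set of small, independent cases.
specs = [
    JobspecV1.from_command(["./small_case", f"input_{i}.toml"], num_tasks=1)
    for i in range(200)
]

with FluxExecutor() as executor:
    futures = [executor.submit(spec) for spec in specs]
    for fut in as_completed(futures):
        print("finished job", fut.jobid())
```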

To multiply such benefits across a wider range of ECP workflows, the Flux team has also played a major role in the ECP ExaWorks project as one of its four founding members. The goal of ExaWorks is to build a Software Development Kit, or SDK, for workflows and to grow a community around common APIs and open-source functional components that can be used individually or in combination to create sophisticated, dynamic workflows that take advantage of exascale-class supercomputers.

Innovation and Collaboration

Flux won a 2021 R&D 100 Award in the Software/Services category.

“The reason Flux won is probably a mix of our past success and the future prospects,” Ahn said. “The scientific and enterprise computing community reviewed our technology, and I believe they saw it as future ready in terms of how we support the workflows and how a diverse set of specialty hardware needs to be supported in cloud computing as well as HPC. So, my guess is that all that factored in.”

Flux’s graph approach has positioned the product for the convergence of HPC and cloud computing on the El Capitan system and beyond, and it has fostered strategic multidisciplinary collaborations. Under a memorandum of understanding, the Flux team formed a partnership with RedHat OpenShift and the IBM Thomas J. Watson Research Center, which led to key findings published in two papers: one at the 2021 Smoky Mountain Conference (SMC21) and the other at the CANOPIE HPC Workshop at the SC21 supercomputing conference.

The collaborations with RedHat OpenShift and IBM informed Flux’s converged-computing directions and set the stage for enabling and testing HPE Rabbit storage. That accomplishment stemmed from scientists at T. J. Watson creating KubeFlux, which used one of Flux’s core components to make intelligent, sophisticated pod-placement decisions in Kubernetes, an open-source container orchestration system for automating software deployment, scaling, and management. Pods are the smallest, most basic deployable container objects in Kubernetes.

“As we’ve collaborated more closely with IBM and now RedHat OpenShift partners, we’ve gone several steps further with KubeFlux and more tightly integrated and provided a more feature-rich plugin,” said LLNL computer scientist and Flux team member Dan Milroy. “We take the scheduling component of Flux and plug that into Kubernetes, and then that is an engine that drives these very sophisticated scheduling decisions in terms of where to place pods on hardware resources.”

The Flux team’s SMC21 paper contains a description of the new KubeFlux plugin and details about their background research on Kubernetes and its scheduling framework, which allows third-party plugins to supply scheduling-decision information to Kubernetes.
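From the application side, opting into a secondary scheduler plugin of this kind typically amounts to naming it in the pod specification. The sketch below uses the Kubernetes Python client and is illustrative only; the scheduler name, container image, and namespace are assumptions rather than details taken from the papers.

```python
# Illustrative sketch: ask Kubernetes to schedule a pod with a non-default
# scheduler (e.g., a KubeFlux-style plugin) by setting schedulerName.
# The scheduler name "kubeflux" and the image are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gromacs-test"),
    spec=client.V1PodSpec(
        scheduler_name="kubeflux",   # hand placement to the plugin scheduler
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="gromacs",
                image="gromacs/gromacs:latest",  # assumed image
                command=["gmx", "--version"],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```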

“We discovered there is a pretty significant limitation in terms of the API that Kubernetes exposes and how third-party plugins can integrate with that, which can result in split-brain states for schedulers,” Milroy said.

Building on the SMC21 paper, the Flux team published at the CANOPIE HPC Workshop, taking the work a step further with studies comparing the performance of GROMACS, one of the most widely used open-source chemistry codes, when scheduled by either the Kubernetes default scheduler or KubeFlux.

The team found that KubeFlux makes far more sophisticated and intelligent decisions about where to place pods on resources than the Kubernetes default scheduler does. Under certain circumstances, KubeFlux enabled a 4× improvement in the performance of the GROMACS application.

“Part of the reason behind this is that Kubernetes is designed to facilitate and declaratively manage microservices rather than high-performance applications, and now that we’re seeing a movement toward integrating HPC and cloud together, we’re seeing applications that demand more performance, but truly rely on the Kubernetes default scheduler,” Milroy said. “That’s exposing limitations in terms of its decision-making capabilities and its orientation toward microservices rather than toward more high-performance applications. The demand for greater performance is increasing, and this KubeFlux plugin scheduler that we’ve created is designed to meet that demand.”

Forward Motion

Among the next actions for the Flux project is enhancing the software’s system resource manager to ensure the product’s multi-user mode schedules jobs in a manner that gives multiple simultaneous users their fair share of access to system resources based on the amount they requested up front. Working on that aspect are LLNL computer scientists Mark Grondona, Jim Garlick, Al Chu, Chris Moussa, James Corbett, and Ryan Day.

Moussa handles job accounting, which he described as having two parts.

“One is balancing the order of jobs when they’re submitted in a multi-user environment, and then there’s just the general administration and management of users in the database,” Moussa said. “So that’s where most of my work is focused, and we’re continuing to make progress on that front in preparation for the system instances of Flux.”

Flux is composed of many communication brokers that create a tree-based overlay network that must remain resilient so that messages can be passed between different parts of the system. Because of that dynamic, many issues in the software design revolve around resiliency.

“If brokers that are supposed to be routing messages go down, a lot of problems happen,” Grondona said. “So, we’re focusing on the initial system instances, keeping the tree-based network simple, and then adding a lot of functionality to deal with exceptional events like brokers going down or other problems happening. We have to be able to keep the system instances running for weeks or months without losing user jobs or data, and so we’re developing a lot of low-level testing to help us achieve those goals.”

A multi-user computing environment makes properly designed and implemented security features critical. The Flux team has learned from experience across many projects that if the security-significant bits (i.e., the ones that run with privilege) are isolated and that layer of code is kept small, harmful bugs that could lead to privilege attacks are less likely. A separate Flux security project contains all of the security-significant code, the main part of which is the IMP, or Independent Minister of Privilege, a security helper for Flux.

Flux itself runs as an unprivileged user across the network, so a compromise there does not escalate privileges; the IMP process is used only during the transition to running work as the submitting user.

“We use cryptographic signatures to ensure that the IMP only runs work that a user has requested,” Grondona said. “And then the plan is to make heavy use of Linux cgroups to isolate different users’ jobs and allocate them using a Linux feature. The users are given only the resources they’re allowed to have. At the system instance level, the plan now is to have every job that’s submitted spin up as a single user of Flux. Everything under that is contained. It’s running as that one user, and they have all the features of Flux within that one job. We feel pretty good about the security design in Flux.”
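As a generic illustration of the signed-request idea (not Flux’s actual signing mechanism, message format, or key management), the sketch below shows a user signing a job request and a privileged helper refusing to run anything whose signature does not verify. The library choice and the request contents are assumptions.

```python
# Generic illustration (not Flux's actual mechanism): a user signs a job
# request, and a privileged helper runs it only if the signature verifies.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

user_key = Ed25519PrivateKey.generate()
request = b'{"command": ["./my_app"], "user": "alice"}'  # hypothetical request
signature = user_key.sign(request)

# The privileged helper holds only the public key and rejects unsigned work.
public_key = user_key.public_key()
try:
    public_key.verify(signature, request)
    print("signature valid: request may be executed as the user")
except InvalidSignature:
    print("signature invalid: request rejected")
```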

Flux’s security is designed such that the attack surface of the component that runs as root in privileged mode is very small.

“So, it drastically lowers the possibility of being compromised, whereas other products that run the whole thing as root can have bad things happen if even a small component of the product gets compromised,” Ahn said.

To propel Flux to its next stage of development, the project’s core team will deploy a superset configuration in which a system instance and a single-user-mode instance run simultaneously. The next step in the plan of record is to replace the existing solution on LLNL’s large Linux clusters.

“That means that once we get into Livermore systems, the bits will go into our sister laboratories that include Los Alamos and Sandia,” Ahn said. “So that’s one big area. Then we’re going to continue to support single-user mode, where users can use Flux’s single-user mode without waiting for their corresponding center to replace their workload manager with Flux. We’re going to support that mode of operation for a while. But as users use Flux, there will be more and more requests to the center to replace their workload manager. So, I can see two to three years down the road, there’ll be more system instances popping up at other high-end computing sites.”

With respect to cloud computing, the Flux team is in learning mode, researching the challenges and forming strategic R&D collaborations with the aim of pursuing that approach over the next two to three years to find product solutions that can be channeled into those R&D efforts.

“RedHat recently told us they want to place a product around KubeFlux, so that’s going to be another interesting bit,” Ahn said. “And I’m very excited to see what the cloud guys say when KubeFlux is available on the cloud side like Amazon and when they run HPC workloads on Amazon AWS or Microsoft Azure.”

As part of the Flux team’s next big R&D effort, they are preparing a pitch that will offer a new perspective on how scientific applications are mapped to computing resources at large centers like LC. The aim is to counter the decade-old assumption that users can effectively prescribe every small detail concerning how their applications will be mapped to the computing resources at the center.

“Say I have a drug design workflow and some of the components are working really well on CPU-only supercomputers, while other components are working better on GPUs, and then I try to co-schedule those two things simultaneously with precise imperative prescriptions,” Ahn said. “That’s a very difficult thing to do at this point. And even if scientists can live with that kind of mapping complexity, when their recipes come to a center, the center cannot do a good job of mapping for optimization. So, I’m trying to start a project where we change the fundamental ways to map the scientific application to the entire center resources without having to ask and require users to prescribe every little detail.”

If the application-mapping project is approved and funded, users will have higher-level, more flexible idioms for describing their resource needs without specifying which supercomputers and nodes should be used, whether simultaneously or in a staggered way.

Flexibility for the Future  

The descriptive, rather than prescriptive, approach to application mapping will become even more relevant as the exascale era matures and the convergence of HPC with cloud computing deepens.

“In a cloud software stack, users aren’t asked to prescribe every little detail,” Ahn said. “They select what we call the declarative-style idiom. They want this number of services, and they don’t care where the services are running. The cloud will take care of that. And if this kind of paradigm change is made at the HPC level, our stack will be an inch closer to being more compatible with cloud computing. Cloud computing is huge. It’s like an order of magnitude larger than the HPC market, and we want to make sure HPC software is completely compatible with the cloud, which will be very important for post-exascale.”
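To make the contrast concrete, a purely hypothetical sketch of the two idioms might look like the following: the prescriptive form pins work to named machines and nodes, while the descriptive form states only what each workflow component needs and leaves placement to the center’s software. None of these field names come from Flux; they are invented for illustration.

```python
# Purely hypothetical sketch contrasting prescriptive and descriptive resource
# requests; field names are invented for illustration and are not Flux APIs.

prescriptive_request = {
    # User pins every detail: which machine, which nodes, what order.
    "md_stage": {"machine": "gpu_system", "nodes": ["node101", "node102"], "gpus_per_node": 4},
    "analysis": {"machine": "cpu_cluster", "nodes": ["node003"], "start_after": "md_stage"},
}

descriptive_request = {
    # User states needs; the center's software decides where and when.
    "md_stage": {"needs": {"gpus": 8, "memory_gb": 256}},
    "analysis": {"needs": {"cores": 36}, "depends_on": ["md_stage"]},
}
```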

The Flux product is well-positioned for the HPC–cloud convergence.

“It’s designed such that it integrates very, very well with, and facilitates, resource dynamism,” Milroy said. “Part of that is the hierarchical nature of it, and the other is the graph-based resource representation. It turns out that in a cloud environment, resources can change. They can change not only in quantity but also in type and in time. Representing the resources in a graph and then having Flux instances be created hierarchically is extremely conducive to managing cloud-based resources and scheduling cloud resources. And that’s going to be a key component of HPC and cloud convergence in the future, where we see Kubernetes merging even closer together with HPC resource managers.”

 

Flux’s fully hierarchical scheduling is designed to cope with key emerging workload challenges: co-scheduling, job throughput, job communication and coordination, and portability. Credit: LLNL

“To do that, you have to have a resource representation that considers all the flexibility of the cloud, and Flux already enables that, which is a huge advantage,” Milroy said. “One of the Flux subprojects is directed at using Flux to instantiate Kubernetes and then co-manage resources.”

Along with the HPC convergence with the cloud, another expected trend is the era of highly specialized hardware.

“Gone are the days HPC could get its high performance using a few homogeneous compute processors,” Ahn said. “Starting in 2018, new additions to the Top500 list of the most powerful supercomputers drove more performance from specialized hardware, including GPUs, than general-purpose hardware like CPUs. That trend will be accelerated. Part of that is AI. If you look at the current industry, they are making specialized hardware. About 50 startups are working on ASICs, or application-specific integrated circuits, which include AI accelerators. LC has already put accelerators such as Cerebras and SambaNova in place, and this trend will happen more.”

Some of today’s systems apply heterogeneity through the use of multiple partitions containing different specialized hardware.

“One example is Perlmutter at the National Energy Research Scientific Computing Center, NERSC, which has two partitions, each with a different compute hardware type,” Ahn said. “And if you look at European supercomputers, they have a snowflake-like architecture where they have five or six different partitions within a supercomputer. And our users want to use different collections of hardware in their workflows. The mapping of their workflows, which consist of many applications across different specialized partitions and specialized hardware, will be very hard. Flux has enough flexibility, including its graph-based and API-based approaches, to help us overcome what I call this post-exascale crisis.”

Related Content

ECP podcast episode: The Flux Software Framework Manages and Schedules Modern Supercomputing Workflows

Flux: Building a Framework for Resource Management

flux-framework on github.com


The author, Scott Gibson, is a communications specialist for the Exascale Computing Project. This article originally appeared on the ECP website.
